00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3924 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3519 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.022 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.023 The recommended git tool is: git 00:00:00.023 using credential 00000000-0000-0000-0000-000000000002 00:00:00.025 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.040 Fetching changes from the remote Git repository 00:00:00.042 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.061 Using shallow fetch with depth 1 00:00:00.061 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.061 > git --version # timeout=10 00:00:00.093 > git --version # 'git version 2.39.2' 00:00:00.093 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.145 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.145 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.264 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.276 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.287 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:02.287 > git config core.sparsecheckout # timeout=10 00:00:02.298 > git read-tree -mu HEAD # timeout=10 00:00:02.314 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 
00:00:02.336 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:02.336 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:02.607 [Pipeline] Start of Pipeline 00:00:02.620 [Pipeline] library 00:00:02.621 Loading library shm_lib@master 00:00:02.621 Library shm_lib@master is cached. Copying from home. 00:00:02.631 [Pipeline] node 00:00:02.642 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:02.644 [Pipeline] { 00:00:02.652 [Pipeline] catchError 00:00:02.653 [Pipeline] { 00:00:02.660 [Pipeline] wrap 00:00:02.665 [Pipeline] { 00:00:02.670 [Pipeline] stage 00:00:02.671 [Pipeline] { (Prologue) 00:00:02.682 [Pipeline] echo 00:00:02.683 Node: VM-host-WFP7 00:00:02.686 [Pipeline] cleanWs 00:00:02.695 [WS-CLEANUP] Deleting project workspace... 00:00:02.695 [WS-CLEANUP] Deferred wipeout is used... 00:00:02.702 [WS-CLEANUP] done 00:00:02.880 [Pipeline] setCustomBuildProperty 00:00:02.943 [Pipeline] httpRequest 00:00:03.346 [Pipeline] echo 00:00:03.347 Sorcerer 10.211.164.101 is alive 00:00:03.354 [Pipeline] retry 00:00:03.356 [Pipeline] { 00:00:03.365 [Pipeline] httpRequest 00:00:03.369 HttpMethod: GET 00:00:03.370 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:03.370 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:03.371 Response Code: HTTP/1.1 200 OK 00:00:03.372 Success: Status code 200 is in the accepted range: 200,404 00:00:03.372 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:03.518 [Pipeline] } 00:00:03.533 [Pipeline] // retry 00:00:03.540 [Pipeline] sh 00:00:03.823 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:03.837 [Pipeline] httpRequest 00:00:04.174 [Pipeline] echo 00:00:04.176 Sorcerer 10.211.164.101 is alive 00:00:04.184 [Pipeline] retry 00:00:04.185 
[Pipeline] { 00:00:04.196 [Pipeline] httpRequest 00:00:04.200 HttpMethod: GET 00:00:04.200 URL: http://10.211.164.101/packages/spdk_92108e0a2be7a969e8ee761a776a1ea64465759a.tar.gz 00:00:04.200 Sending request to url: http://10.211.164.101/packages/spdk_92108e0a2be7a969e8ee761a776a1ea64465759a.tar.gz 00:00:04.201 Response Code: HTTP/1.1 200 OK 00:00:04.202 Success: Status code 200 is in the accepted range: 200,404 00:00:04.202 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_92108e0a2be7a969e8ee761a776a1ea64465759a.tar.gz 00:00:17.399 [Pipeline] } 00:00:17.415 [Pipeline] // retry 00:00:17.421 [Pipeline] sh 00:00:17.704 + tar --no-same-owner -xf spdk_92108e0a2be7a969e8ee761a776a1ea64465759a.tar.gz 00:00:20.262 [Pipeline] sh 00:00:20.549 + git -C spdk log --oneline -n5 00:00:20.549 92108e0a2 fsdev/aio: add support for null IOs 00:00:20.549 dcdab59d3 lib/reduce: Check return code of read superblock 00:00:20.549 95d9d27f7 bdev/nvme: controller failover/multipath doc change 00:00:20.549 f366dac4a bdev/nvme: removed 'multipath' param from spdk_bdev_nvme_create() 00:00:20.549 aa7c3b1e2 bdev/nvme: changed default config to multipath 00:00:20.571 [Pipeline] withCredentials 00:00:20.583 > git --version # timeout=10 00:00:20.598 > git --version # 'git version 2.39.2' 00:00:20.618 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:20.620 [Pipeline] { 00:00:20.628 [Pipeline] retry 00:00:20.630 [Pipeline] { 00:00:20.645 [Pipeline] sh 00:00:20.931 + git ls-remote http://dpdk.org/git/dpdk main 00:00:21.204 [Pipeline] } 00:00:21.221 [Pipeline] // retry 00:00:21.226 [Pipeline] } 00:00:21.242 [Pipeline] // withCredentials 00:00:21.251 [Pipeline] httpRequest 00:00:21.648 [Pipeline] echo 00:00:21.650 Sorcerer 10.211.164.101 is alive 00:00:21.660 [Pipeline] retry 00:00:21.662 [Pipeline] { 00:00:21.676 [Pipeline] httpRequest 00:00:21.681 HttpMethod: GET 00:00:21.682 URL: 
http://10.211.164.101/packages/dpdk_e7bc451c996b5882c5d8267725f3d88118009c75.tar.gz 00:00:21.682 Sending request to url: http://10.211.164.101/packages/dpdk_e7bc451c996b5882c5d8267725f3d88118009c75.tar.gz 00:00:21.694 Response Code: HTTP/1.1 200 OK 00:00:21.695 Success: Status code 200 is in the accepted range: 200,404 00:00:21.695 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_e7bc451c996b5882c5d8267725f3d88118009c75.tar.gz 00:01:12.208 [Pipeline] } 00:01:12.226 [Pipeline] // retry 00:01:12.234 [Pipeline] sh 00:01:12.522 + tar --no-same-owner -xf dpdk_e7bc451c996b5882c5d8267725f3d88118009c75.tar.gz 00:01:13.917 [Pipeline] sh 00:01:14.204 + git -C dpdk log --oneline -n5 00:01:14.204 e7bc451c99 trace: disable traces at compilation 00:01:14.204 dbdf3d5581 timer: override CPU TSC frequency with OS value 00:01:14.204 7268f21aa0 timer: improve TSC estimation accuracy 00:01:14.204 8df71650e9 drivers: remove more redundant newline in Marvell drivers 00:01:14.204 41b09d64e3 eal/x86: fix 32-bit write combining store 00:01:14.225 [Pipeline] writeFile 00:01:14.241 [Pipeline] sh 00:01:14.531 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:14.544 [Pipeline] sh 00:01:14.829 + cat autorun-spdk.conf 00:01:14.830 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.830 SPDK_RUN_ASAN=1 00:01:14.830 SPDK_RUN_UBSAN=1 00:01:14.830 SPDK_TEST_RAID=1 00:01:14.830 SPDK_TEST_NATIVE_DPDK=main 00:01:14.830 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:14.830 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:14.838 RUN_NIGHTLY=1 00:01:14.840 [Pipeline] } 00:01:14.855 [Pipeline] // stage 00:01:14.873 [Pipeline] stage 00:01:14.875 [Pipeline] { (Run VM) 00:01:14.889 [Pipeline] sh 00:01:15.176 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:15.176 + echo 'Start stage prepare_nvme.sh' 00:01:15.176 Start stage prepare_nvme.sh 00:01:15.176 + [[ -n 0 ]] 00:01:15.176 + disk_prefix=ex0 00:01:15.176 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 
00:01:15.176 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:01:15.176 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:01:15.176 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.176 ++ SPDK_RUN_ASAN=1 00:01:15.176 ++ SPDK_RUN_UBSAN=1 00:01:15.176 ++ SPDK_TEST_RAID=1 00:01:15.176 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:15.176 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:15.176 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:15.176 ++ RUN_NIGHTLY=1 00:01:15.176 + cd /var/jenkins/workspace/raid-vg-autotest 00:01:15.176 + nvme_files=() 00:01:15.176 + declare -A nvme_files 00:01:15.176 + backend_dir=/var/lib/libvirt/images/backends 00:01:15.176 + nvme_files['nvme.img']=5G 00:01:15.176 + nvme_files['nvme-cmb.img']=5G 00:01:15.176 + nvme_files['nvme-multi0.img']=4G 00:01:15.176 + nvme_files['nvme-multi1.img']=4G 00:01:15.176 + nvme_files['nvme-multi2.img']=4G 00:01:15.176 + nvme_files['nvme-openstack.img']=8G 00:01:15.176 + nvme_files['nvme-zns.img']=5G 00:01:15.176 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:15.176 + (( SPDK_TEST_FTL == 1 )) 00:01:15.176 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:15.176 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:15.176 + for nvme in "${!nvme_files[@]}" 00:01:15.176 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:01:15.176 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:15.176 + for nvme in "${!nvme_files[@]}" 00:01:15.176 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:01:15.176 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:15.176 + for nvme in "${!nvme_files[@]}" 00:01:15.176 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:01:15.176 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:15.176 + for nvme in "${!nvme_files[@]}" 00:01:15.176 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:01:15.176 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:15.176 + for nvme in "${!nvme_files[@]}" 00:01:15.176 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:01:15.176 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:15.176 + for nvme in "${!nvme_files[@]}" 00:01:15.176 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:01:15.176 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:15.176 + for nvme in "${!nvme_files[@]}" 00:01:15.176 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:01:15.436 
Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:15.436 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:01:15.436 + echo 'End stage prepare_nvme.sh' 00:01:15.436 End stage prepare_nvme.sh 00:01:15.449 [Pipeline] sh 00:01:15.736 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:15.736 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39 00:01:15.736 00:01:15.736 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:01:15.736 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:01:15.736 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:01:15.736 HELP=0 00:01:15.736 DRY_RUN=0 00:01:15.736 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:01:15.736 NVME_DISKS_TYPE=nvme,nvme, 00:01:15.736 NVME_AUTO_CREATE=0 00:01:15.736 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:01:15.736 NVME_CMB=,, 00:01:15.736 NVME_PMR=,, 00:01:15.736 NVME_ZNS=,, 00:01:15.736 NVME_MS=,, 00:01:15.736 NVME_FDP=,, 00:01:15.736 SPDK_VAGRANT_DISTRO=fedora39 00:01:15.736 SPDK_VAGRANT_VMCPU=10 00:01:15.736 SPDK_VAGRANT_VMRAM=12288 00:01:15.736 SPDK_VAGRANT_PROVIDER=libvirt 00:01:15.736 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:15.736 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:15.736 SPDK_OPENSTACK_NETWORK=0 00:01:15.736 VAGRANT_PACKAGE_BOX=0 00:01:15.736 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:15.736 
FORCE_DISTRO=true 00:01:15.736 VAGRANT_BOX_VERSION= 00:01:15.736 EXTRA_VAGRANTFILES= 00:01:15.736 NIC_MODEL=virtio 00:01:15.736 00:01:15.736 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:01:15.736 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:01:17.648 Bringing machine 'default' up with 'libvirt' provider... 00:01:17.908 ==> default: Creating image (snapshot of base box volume). 00:01:18.168 ==> default: Creating domain with the following settings... 00:01:18.168 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728436876_34187a130eae76d6a510 00:01:18.168 ==> default: -- Domain type: kvm 00:01:18.168 ==> default: -- Cpus: 10 00:01:18.168 ==> default: -- Feature: acpi 00:01:18.168 ==> default: -- Feature: apic 00:01:18.168 ==> default: -- Feature: pae 00:01:18.168 ==> default: -- Memory: 12288M 00:01:18.168 ==> default: -- Memory Backing: hugepages: 00:01:18.168 ==> default: -- Management MAC: 00:01:18.168 ==> default: -- Loader: 00:01:18.168 ==> default: -- Nvram: 00:01:18.168 ==> default: -- Base box: spdk/fedora39 00:01:18.168 ==> default: -- Storage pool: default 00:01:18.168 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728436876_34187a130eae76d6a510.img (20G) 00:01:18.168 ==> default: -- Volume Cache: default 00:01:18.168 ==> default: -- Kernel: 00:01:18.168 ==> default: -- Initrd: 00:01:18.168 ==> default: -- Graphics Type: vnc 00:01:18.168 ==> default: -- Graphics Port: -1 00:01:18.168 ==> default: -- Graphics IP: 127.0.0.1 00:01:18.168 ==> default: -- Graphics Password: Not defined 00:01:18.168 ==> default: -- Video Type: cirrus 00:01:18.168 ==> default: -- Video VRAM: 9216 00:01:18.168 ==> default: -- Sound Type: 00:01:18.168 ==> default: -- Keymap: en-us 00:01:18.168 ==> default: -- TPM Path: 00:01:18.168 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:18.168 ==> default: -- Command line args: 00:01:18.168 
==> default: -> value=-device, 00:01:18.168 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:18.168 ==> default: -> value=-drive, 00:01:18.168 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:01:18.168 ==> default: -> value=-device, 00:01:18.168 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:18.168 ==> default: -> value=-device, 00:01:18.168 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:18.168 ==> default: -> value=-drive, 00:01:18.168 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:18.168 ==> default: -> value=-device, 00:01:18.168 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:18.168 ==> default: -> value=-drive, 00:01:18.168 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:18.168 ==> default: -> value=-device, 00:01:18.168 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:18.168 ==> default: -> value=-drive, 00:01:18.168 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:18.168 ==> default: -> value=-device, 00:01:18.168 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:18.168 ==> default: Creating shared folders metadata... 00:01:18.168 ==> default: Starting domain. 00:01:20.077 ==> default: Waiting for domain to get an IP address... 00:01:38.206 ==> default: Waiting for SSH to become available... 00:01:38.206 ==> default: Configuring and enabling network interfaces... 
00:01:43.489 default: SSH address: 192.168.121.118:22 00:01:43.489 default: SSH username: vagrant 00:01:43.489 default: SSH auth method: private key 00:01:45.400 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:53.530 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:00.132 ==> default: Mounting SSHFS shared folder... 00:02:01.511 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:01.511 ==> default: Checking Mount.. 00:02:03.418 ==> default: Folder Successfully Mounted! 00:02:03.418 ==> default: Running provisioner: file... 00:02:04.356 default: ~/.gitconfig => .gitconfig 00:02:04.925 00:02:04.925 SUCCESS! 00:02:04.925 00:02:04.925 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:04.925 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:04.925 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:02:04.925 00:02:04.934 [Pipeline] } 00:02:04.947 [Pipeline] // stage 00:02:04.955 [Pipeline] dir 00:02:04.956 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:02:04.957 [Pipeline] { 00:02:04.969 [Pipeline] catchError 00:02:04.970 [Pipeline] { 00:02:04.982 [Pipeline] sh 00:02:05.267 + vagrant ssh-config --host vagrant 00:02:05.268 + sed -ne /^Host/,$p 00:02:05.268 + tee ssh_conf 00:02:07.805 Host vagrant 00:02:07.805 HostName 192.168.121.118 00:02:07.805 User vagrant 00:02:07.805 Port 22 00:02:07.805 UserKnownHostsFile /dev/null 00:02:07.805 StrictHostKeyChecking no 00:02:07.805 PasswordAuthentication no 00:02:07.805 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:07.805 IdentitiesOnly yes 00:02:07.805 LogLevel FATAL 00:02:07.805 ForwardAgent yes 00:02:07.805 ForwardX11 yes 00:02:07.805 00:02:07.818 [Pipeline] withEnv 00:02:07.820 [Pipeline] { 00:02:07.833 [Pipeline] sh 00:02:08.115 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:08.115 source /etc/os-release 00:02:08.115 [[ -e /image.version ]] && img=$(< /image.version) 00:02:08.115 # Minimal, systemd-like check. 00:02:08.115 if [[ -e /.dockerenv ]]; then 00:02:08.115 # Clear garbage from the node's name: 00:02:08.115 # agt-er_autotest_547-896 -> autotest_547-896 00:02:08.115 # $HOSTNAME is the actual container id 00:02:08.115 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:08.115 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:08.115 # We can assume this is a mount from a host where container is running, 00:02:08.115 # so fetch its hostname to easily identify the target swarm worker. 
00:02:08.115 container="$(< /etc/hostname) ($agent)" 00:02:08.115 else 00:02:08.115 # Fallback 00:02:08.115 container=$agent 00:02:08.115 fi 00:02:08.115 fi 00:02:08.115 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:08.115 00:02:08.387 [Pipeline] } 00:02:08.403 [Pipeline] // withEnv 00:02:08.411 [Pipeline] setCustomBuildProperty 00:02:08.426 [Pipeline] stage 00:02:08.428 [Pipeline] { (Tests) 00:02:08.446 [Pipeline] sh 00:02:08.730 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:09.005 [Pipeline] sh 00:02:09.288 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:09.563 [Pipeline] timeout 00:02:09.563 Timeout set to expire in 1 hr 30 min 00:02:09.565 [Pipeline] { 00:02:09.578 [Pipeline] sh 00:02:09.860 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:10.429 HEAD is now at 92108e0a2 fsdev/aio: add support for null IOs 00:02:10.441 [Pipeline] sh 00:02:10.724 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:10.997 [Pipeline] sh 00:02:11.280 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:11.558 [Pipeline] sh 00:02:11.842 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:02:12.102 ++ readlink -f spdk_repo 00:02:12.102 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:12.102 + [[ -n /home/vagrant/spdk_repo ]] 00:02:12.102 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:12.102 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:12.102 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:12.102 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:12.102 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:12.102 + [[ raid-vg-autotest == pkgdep-* ]] 00:02:12.102 + cd /home/vagrant/spdk_repo 00:02:12.102 + source /etc/os-release 00:02:12.102 ++ NAME='Fedora Linux' 00:02:12.102 ++ VERSION='39 (Cloud Edition)' 00:02:12.102 ++ ID=fedora 00:02:12.102 ++ VERSION_ID=39 00:02:12.102 ++ VERSION_CODENAME= 00:02:12.102 ++ PLATFORM_ID=platform:f39 00:02:12.102 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:12.102 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:12.102 ++ LOGO=fedora-logo-icon 00:02:12.102 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:12.102 ++ HOME_URL=https://fedoraproject.org/ 00:02:12.102 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:12.102 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:12.102 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:12.102 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:12.102 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:12.102 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:12.102 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:12.102 ++ SUPPORT_END=2024-11-12 00:02:12.102 ++ VARIANT='Cloud Edition' 00:02:12.102 ++ VARIANT_ID=cloud 00:02:12.102 + uname -a 00:02:12.102 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:12.102 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:12.672 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:12.672 Hugepages 00:02:12.672 node hugesize free / total 00:02:12.672 node0 1048576kB 0 / 0 00:02:12.672 node0 2048kB 0 / 0 00:02:12.672 00:02:12.672 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:12.672 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:12.672 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:12.672 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:02:12.672 + rm -f /tmp/spdk-ld-path 00:02:12.672 + source autorun-spdk.conf 00:02:12.672 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:12.672 ++ SPDK_RUN_ASAN=1 00:02:12.672 ++ SPDK_RUN_UBSAN=1 00:02:12.672 ++ SPDK_TEST_RAID=1 00:02:12.672 ++ SPDK_TEST_NATIVE_DPDK=main 00:02:12.672 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:12.672 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:12.672 ++ RUN_NIGHTLY=1 00:02:12.672 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:12.672 + [[ -n '' ]] 00:02:12.672 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:12.932 + for M in /var/spdk/build-*-manifest.txt 00:02:12.932 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:12.932 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:12.932 + for M in /var/spdk/build-*-manifest.txt 00:02:12.932 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:12.932 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:12.932 + for M in /var/spdk/build-*-manifest.txt 00:02:12.932 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:12.932 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:12.932 ++ uname 00:02:12.932 + [[ Linux == \L\i\n\u\x ]] 00:02:12.932 + sudo dmesg -T 00:02:12.932 + sudo dmesg --clear 00:02:12.932 + dmesg_pid=6159 00:02:12.932 + [[ Fedora Linux == FreeBSD ]] 00:02:12.932 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:12.932 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:12.932 + sudo dmesg -Tw 00:02:12.932 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:12.932 + [[ -x /usr/src/fio-static/fio ]] 00:02:12.932 + export FIO_BIN=/usr/src/fio-static/fio 00:02:12.932 + FIO_BIN=/usr/src/fio-static/fio 00:02:12.932 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:12.932 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:12.932 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:12.932 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:12.932 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:12.932 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:12.932 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:12.932 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:12.932 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:12.932 Test configuration: 00:02:12.932 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:12.932 SPDK_RUN_ASAN=1 00:02:12.932 SPDK_RUN_UBSAN=1 00:02:12.932 SPDK_TEST_RAID=1 00:02:12.932 SPDK_TEST_NATIVE_DPDK=main 00:02:12.932 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:12.932 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:12.932 RUN_NIGHTLY=1 01:22:11 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:12.932 01:22:11 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:12.932 01:22:11 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:12.932 01:22:11 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:12.932 01:22:11 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:12.932 01:22:11 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:12.932 01:22:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.932 01:22:11 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.932 01:22:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.932 01:22:11 -- paths/export.sh@5 -- $ export PATH 00:02:12.932 01:22:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.932 01:22:11 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:12.932 01:22:11 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:12.932 01:22:11 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728436931.XXXXXX 00:02:12.932 01:22:11 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728436931.neNRAA 00:02:12.932 01:22:11 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:12.932 01:22:11 -- common/autobuild_common.sh@492 -- $ '[' -n main ']' 00:02:12.932 01:22:11 -- common/autobuild_common.sh@493 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:12.932 01:22:11 -- common/autobuild_common.sh@493 -- $ 
scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:12.932 01:22:11 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:12.932 01:22:11 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:12.932 01:22:11 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:12.932 01:22:11 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:12.932 01:22:11 -- common/autotest_common.sh@10 -- $ set +x 00:02:12.932 01:22:11 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:12.932 01:22:11 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:12.932 01:22:11 -- pm/common@17 -- $ local monitor 00:02:12.932 01:22:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.932 01:22:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.932 01:22:11 -- pm/common@25 -- $ sleep 1 00:02:12.932 01:22:11 -- pm/common@21 -- $ date +%s 00:02:12.932 01:22:11 -- pm/common@21 -- $ date +%s 00:02:13.192 01:22:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728436931 00:02:13.192 01:22:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728436931 00:02:13.192 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728436931_collect-vmstat.pm.log 00:02:13.192 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728436931_collect-cpu-load.pm.log 00:02:14.132 01:22:12 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:14.132 01:22:12 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:14.132 01:22:12 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:14.132 01:22:12 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:14.132 01:22:12 -- spdk/autobuild.sh@16 -- $ date -u 00:02:14.132 Wed Oct 9 01:22:12 AM UTC 2024 00:02:14.132 01:22:12 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:14.132 v25.01-pre-41-g92108e0a2 00:02:14.132 01:22:12 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:14.132 01:22:12 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:14.132 01:22:12 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:14.132 01:22:12 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:14.132 01:22:12 -- common/autotest_common.sh@10 -- $ set +x 00:02:14.132 ************************************ 00:02:14.132 START TEST asan 00:02:14.132 ************************************ 00:02:14.132 using asan 00:02:14.132 01:22:12 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:14.132 00:02:14.132 real 0m0.000s 00:02:14.132 user 0m0.000s 00:02:14.132 sys 0m0.000s 00:02:14.132 01:22:12 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:14.132 01:22:12 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:14.132 ************************************ 00:02:14.132 END TEST asan 00:02:14.132 ************************************ 00:02:14.132 01:22:12 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:14.132 01:22:12 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:14.132 01:22:12 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:14.132 01:22:12 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:14.132 01:22:12 -- common/autotest_common.sh@10 -- $ set +x 00:02:14.132 
************************************ 00:02:14.132 START TEST ubsan 00:02:14.132 ************************************ 00:02:14.132 using ubsan 00:02:14.132 01:22:12 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:14.132 00:02:14.132 real 0m0.000s 00:02:14.132 user 0m0.000s 00:02:14.132 sys 0m0.000s 00:02:14.132 01:22:12 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:14.132 01:22:12 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:14.132 ************************************ 00:02:14.132 END TEST ubsan 00:02:14.132 ************************************ 00:02:14.132 01:22:12 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:02:14.132 01:22:12 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:14.132 01:22:12 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:14.132 01:22:12 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:14.132 01:22:12 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:14.132 01:22:12 -- common/autotest_common.sh@10 -- $ set +x 00:02:14.132 ************************************ 00:02:14.132 START TEST build_native_dpdk 00:02:14.132 ************************************ 00:02:14.133 01:22:13 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:14.133 01:22:13 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:14.133 01:22:13 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:14.133 01:22:13 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:14.133 01:22:13 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:14.133 01:22:13 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:14.133 01:22:13 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:14.133 01:22:13 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 
00:02:14.133 01:22:13 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:14.133 01:22:13 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:14.133 01:22:13 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:14.133 01:22:13 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:14.133 01:22:13 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:14.133 01:22:13 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:14.133 01:22:13 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:14.133 01:22:13 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:14.133 01:22:13 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:14.133 01:22:13 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:14.133 01:22:13 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:14.133 01:22:13 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:14.133 01:22:13 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:14.393 e7bc451c99 trace: disable traces at compilation 00:02:14.393 dbdf3d5581 timer: override CPU TSC frequency with OS value 00:02:14.393 7268f21aa0 timer: improve TSC estimation accuracy 00:02:14.393 8df71650e9 drivers: remove more redundant newline in Marvell drivers 00:02:14.393 41b09d64e3 eal/x86: fix 32-bit write combining store 00:02:14.393 01:22:13 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:14.393 01:22:13 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:14.393 01:22:13 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.11.0-rc0 00:02:14.393 01:22:13 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:14.393 01:22:13 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:14.393 01:22:13 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:14.393 01:22:13 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:14.393 01:22:13 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:14.393 01:22:13 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:14.393 01:22:13 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:14.393 01:22:13 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:14.393 01:22:13 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:14.393 01:22:13 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:14.393 
01:22:13 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:14.393 01:22:13 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:14.393 01:22:13 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:14.393 01:22:13 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:14.393 01:22:13 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.11.0-rc0 21.11.0 00:02:14.393 01:22:13 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc0 '<' 21.11.0 00:02:14.393 01:22:13 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:14.393 01:22:13 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:14.393 01:22:13 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:14.393 01:22:13 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:14.393 01:22:13 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:14.393 01:22:13 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:14.393 01:22:13 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:14.393 01:22:13 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:14.393 01:22:13 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:14.394 01:22:13 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:14.394 patching file config/rte_config.h 00:02:14.394 Hunk #1 succeeded at 71 (offset 12 lines). 
00:02:14.394 01:22:13 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.11.0-rc0 24.07.0 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc0 '<' 24.07.0 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:14.394 01:22:13 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 24.11.0-rc0 24.07.0 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 24.11.0-rc0 '>=' 24.07.0 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:14.394 01:22:13 
build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:14.394 01:22:13 build_native_dpdk -- scripts/common.sh@367 -- $ return 0 00:02:14.394 01:22:13 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:14.394 patching file drivers/bus/pci/linux/pci_uio.c 00:02:14.394 01:22:13 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:14.394 01:22:13 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:14.394 01:22:13 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:14.394 01:22:13 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:14.394 01:22:13 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:19.686 The Meson build 
system 00:02:19.686 Version: 1.5.0 00:02:19.686 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:19.686 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:19.686 Build type: native build 00:02:19.686 Program cat found: YES (/usr/bin/cat) 00:02:19.686 Project name: DPDK 00:02:19.686 Project version: 24.11.0-rc0 00:02:19.686 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:19.686 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:19.686 Host machine cpu family: x86_64 00:02:19.686 Host machine cpu: x86_64 00:02:19.686 Message: ## Building in Developer Mode ## 00:02:19.686 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:19.686 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:19.686 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:19.686 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:02:19.686 Program cat found: YES (/usr/bin/cat) 00:02:19.686 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:19.686 Compiler for C supports arguments -march=native: YES 00:02:19.686 Checking for size of "void *" : 8 00:02:19.686 Checking for size of "void *" : 8 (cached) 00:02:19.686 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:19.686 Library m found: YES 00:02:19.686 Library numa found: YES 00:02:19.686 Has header "numaif.h" : YES 00:02:19.686 Library fdt found: NO 00:02:19.686 Library execinfo found: NO 00:02:19.686 Has header "execinfo.h" : YES 00:02:19.686 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:19.686 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:19.686 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:19.686 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:19.686 Run-time dependency openssl found: YES 3.1.1 00:02:19.686 Run-time dependency libpcap found: YES 1.10.4 00:02:19.686 Has header "pcap.h" with dependency libpcap: YES 00:02:19.686 Compiler for C supports arguments -Wcast-qual: YES 00:02:19.686 Compiler for C supports arguments -Wdeprecated: YES 00:02:19.686 Compiler for C supports arguments -Wformat: YES 00:02:19.686 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:19.686 Compiler for C supports arguments -Wformat-security: NO 00:02:19.686 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:19.686 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:19.686 Compiler for C supports arguments -Wnested-externs: YES 00:02:19.686 Compiler for C supports arguments -Wold-style-definition: YES 00:02:19.686 Compiler for C supports arguments -Wpointer-arith: YES 00:02:19.686 Compiler for C supports arguments -Wsign-compare: YES 00:02:19.686 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:19.686 Compiler for C supports arguments -Wundef: YES 00:02:19.686 Compiler for C supports arguments -Wwrite-strings: YES 00:02:19.686 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:19.686 Compiler for C 
supports arguments -Wno-packed-not-aligned: YES 00:02:19.686 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:19.686 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:19.686 Program objdump found: YES (/usr/bin/objdump) 00:02:19.686 Compiler for C supports arguments -mavx512f: YES 00:02:19.686 Checking if "AVX512 checking" compiles: YES 00:02:19.686 Fetching value of define "__SSE4_2__" : 1 00:02:19.686 Fetching value of define "__AES__" : 1 00:02:19.686 Fetching value of define "__AVX__" : 1 00:02:19.686 Fetching value of define "__AVX2__" : 1 00:02:19.686 Fetching value of define "__AVX512BW__" : 1 00:02:19.686 Fetching value of define "__AVX512CD__" : 1 00:02:19.686 Fetching value of define "__AVX512DQ__" : 1 00:02:19.686 Fetching value of define "__AVX512F__" : 1 00:02:19.686 Fetching value of define "__AVX512VL__" : 1 00:02:19.686 Fetching value of define "__PCLMUL__" : 1 00:02:19.686 Fetching value of define "__RDRND__" : 1 00:02:19.686 Fetching value of define "__RDSEED__" : 1 00:02:19.686 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:19.686 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:19.686 Message: lib/log: Defining dependency "log" 00:02:19.686 Message: lib/kvargs: Defining dependency "kvargs" 00:02:19.686 Message: lib/argparse: Defining dependency "argparse" 00:02:19.686 Message: lib/telemetry: Defining dependency "telemetry" 00:02:19.686 Checking for function "getentropy" : NO 00:02:19.686 Message: lib/eal: Defining dependency "eal" 00:02:19.686 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:02:19.686 Message: lib/ring: Defining dependency "ring" 00:02:19.686 Message: lib/rcu: Defining dependency "rcu" 00:02:19.686 Message: lib/mempool: Defining dependency "mempool" 00:02:19.686 Message: lib/mbuf: Defining dependency "mbuf" 00:02:19.686 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:19.686 Fetching value of define "__AVX512F__" : 1 (cached) 
00:02:19.686 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:19.686 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:19.686 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:19.686 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:19.686 Compiler for C supports arguments -mpclmul: YES 00:02:19.686 Compiler for C supports arguments -maes: YES 00:02:19.686 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:19.686 Compiler for C supports arguments -mavx512bw: YES 00:02:19.686 Compiler for C supports arguments -mavx512dq: YES 00:02:19.686 Compiler for C supports arguments -mavx512vl: YES 00:02:19.686 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:19.686 Compiler for C supports arguments -mavx2: YES 00:02:19.686 Compiler for C supports arguments -mavx: YES 00:02:19.686 Message: lib/net: Defining dependency "net" 00:02:19.686 Message: lib/meter: Defining dependency "meter" 00:02:19.686 Message: lib/ethdev: Defining dependency "ethdev" 00:02:19.686 Message: lib/pci: Defining dependency "pci" 00:02:19.686 Message: lib/cmdline: Defining dependency "cmdline" 00:02:19.686 Message: lib/metrics: Defining dependency "metrics" 00:02:19.686 Message: lib/hash: Defining dependency "hash" 00:02:19.686 Message: lib/timer: Defining dependency "timer" 00:02:19.686 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:19.686 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:19.686 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:19.686 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:19.686 Message: lib/acl: Defining dependency "acl" 00:02:19.686 Message: lib/bbdev: Defining dependency "bbdev" 00:02:19.686 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:19.687 Run-time dependency libelf found: YES 0.191 00:02:19.687 Message: lib/bpf: Defining dependency "bpf" 00:02:19.687 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:19.687 Message: lib/compressdev: 
Defining dependency "compressdev" 00:02:19.687 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:19.687 Message: lib/distributor: Defining dependency "distributor" 00:02:19.687 Message: lib/dmadev: Defining dependency "dmadev" 00:02:19.687 Message: lib/efd: Defining dependency "efd" 00:02:19.687 Message: lib/eventdev: Defining dependency "eventdev" 00:02:19.687 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:19.687 Message: lib/gpudev: Defining dependency "gpudev" 00:02:19.687 Message: lib/gro: Defining dependency "gro" 00:02:19.687 Message: lib/gso: Defining dependency "gso" 00:02:19.687 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:19.687 Message: lib/jobstats: Defining dependency "jobstats" 00:02:19.687 Message: lib/latencystats: Defining dependency "latencystats" 00:02:19.687 Message: lib/lpm: Defining dependency "lpm" 00:02:19.687 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:19.687 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:19.687 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:19.687 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:19.687 Message: lib/member: Defining dependency "member" 00:02:19.687 Message: lib/pcapng: Defining dependency "pcapng" 00:02:19.687 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:19.687 Message: lib/power: Defining dependency "power" 00:02:19.687 Message: lib/rawdev: Defining dependency "rawdev" 00:02:19.687 Message: lib/regexdev: Defining dependency "regexdev" 00:02:19.687 Message: lib/mldev: Defining dependency "mldev" 00:02:19.687 Message: lib/rib: Defining dependency "rib" 00:02:19.687 Message: lib/reorder: Defining dependency "reorder" 00:02:19.687 Message: lib/sched: Defining dependency "sched" 00:02:19.687 Message: lib/security: Defining dependency "security" 00:02:19.687 Message: lib/stack: Defining dependency "stack" 00:02:19.687 Has header "linux/userfaultfd.h" : YES 00:02:19.687 Has header 
"linux/vduse.h" : YES 00:02:19.687 Message: lib/vhost: Defining dependency "vhost" 00:02:19.687 Message: lib/ipsec: Defining dependency "ipsec" 00:02:19.687 Message: lib/pdcp: Defining dependency "pdcp" 00:02:19.687 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:19.687 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:19.687 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:19.687 Message: lib/fib: Defining dependency "fib" 00:02:19.687 Message: lib/port: Defining dependency "port" 00:02:19.687 Message: lib/pdump: Defining dependency "pdump" 00:02:19.687 Message: lib/table: Defining dependency "table" 00:02:19.687 Message: lib/pipeline: Defining dependency "pipeline" 00:02:19.687 Message: lib/graph: Defining dependency "graph" 00:02:19.687 Message: lib/node: Defining dependency "node" 00:02:19.687 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:19.687 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:19.687 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:21.595 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:21.595 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:21.595 Compiler for C supports arguments -Wno-unused-value: YES 00:02:21.595 Compiler for C supports arguments -Wno-format: YES 00:02:21.595 Compiler for C supports arguments -Wno-format-security: YES 00:02:21.595 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:21.595 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:21.595 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:21.595 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:21.595 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:21.595 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:21.595 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:21.595 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:21.595 
Compiler for C supports arguments -march=skylake-avx512: YES 00:02:21.595 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:21.595 Has header "sys/epoll.h" : YES 00:02:21.595 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:21.595 Configuring doxy-api-html.conf using configuration 00:02:21.595 Configuring doxy-api-man.conf using configuration 00:02:21.595 Program mandb found: YES (/usr/bin/mandb) 00:02:21.595 Program sphinx-build found: NO 00:02:21.595 Configuring rte_build_config.h using configuration 00:02:21.595 Message: 00:02:21.595 ================= 00:02:21.595 Applications Enabled 00:02:21.595 ================= 00:02:21.595 00:02:21.595 apps: 00:02:21.595 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:21.595 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:21.595 test-pmd, test-regex, test-sad, test-security-perf, 00:02:21.595 00:02:21.595 Message: 00:02:21.595 ================= 00:02:21.595 Libraries Enabled 00:02:21.595 ================= 00:02:21.595 00:02:21.595 libs: 00:02:21.595 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:02:21.595 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:02:21.595 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:02:21.595 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:02:21.595 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:02:21.595 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:02:21.595 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:02:21.595 graph, node, 00:02:21.595 00:02:21.595 Message: 00:02:21.595 =============== 00:02:21.595 Drivers Enabled 00:02:21.595 =============== 00:02:21.595 00:02:21.595 common: 00:02:21.595 00:02:21.595 bus: 00:02:21.595 pci, vdev, 00:02:21.595 mempool: 00:02:21.595 ring, 00:02:21.595 dma: 00:02:21.595 00:02:21.596 net: 
00:02:21.596 i40e,
00:02:21.596 raw:
00:02:21.596
00:02:21.596 crypto:
00:02:21.596
00:02:21.596 compress:
00:02:21.596
00:02:21.596 regex:
00:02:21.596
00:02:21.596 ml:
00:02:21.596
00:02:21.596 vdpa:
00:02:21.596
00:02:21.596 event:
00:02:21.596
00:02:21.596 baseband:
00:02:21.596
00:02:21.596 gpu:
00:02:21.596
00:02:21.596
00:02:21.596 Message:
00:02:21.596 =================
00:02:21.596 Content Skipped
00:02:21.596 =================
00:02:21.596
00:02:21.596 apps:
00:02:21.596
00:02:21.596 libs:
00:02:21.596
00:02:21.596 drivers:
00:02:21.596 common/cpt: not in enabled drivers build config
00:02:21.596 common/dpaax: not in enabled drivers build config
00:02:21.596 common/iavf: not in enabled drivers build config
00:02:21.596 common/idpf: not in enabled drivers build config
00:02:21.596 common/ionic: not in enabled drivers build config
00:02:21.596 common/mvep: not in enabled drivers build config
00:02:21.596 common/octeontx: not in enabled drivers build config
00:02:21.596 bus/auxiliary: not in enabled drivers build config
00:02:21.596 bus/cdx: not in enabled drivers build config
00:02:21.596 bus/dpaa: not in enabled drivers build config
00:02:21.596 bus/fslmc: not in enabled drivers build config
00:02:21.596 bus/ifpga: not in enabled drivers build config
00:02:21.596 bus/platform: not in enabled drivers build config
00:02:21.596 bus/uacce: not in enabled drivers build config
00:02:21.596 bus/vmbus: not in enabled drivers build config
00:02:21.596 common/cnxk: not in enabled drivers build config
00:02:21.596 common/mlx5: not in enabled drivers build config
00:02:21.596 common/nfp: not in enabled drivers build config
00:02:21.596 common/nitrox: not in enabled drivers build config
00:02:21.596 common/qat: not in enabled drivers build config
00:02:21.596 common/sfc_efx: not in enabled drivers build config
00:02:21.596 mempool/bucket: not in enabled drivers build config
00:02:21.596 mempool/cnxk: not in enabled drivers build config
00:02:21.596 mempool/dpaa: not in enabled drivers build config
00:02:21.596 mempool/dpaa2: not in enabled drivers build config
00:02:21.596 mempool/octeontx: not in enabled drivers build config
00:02:21.596 mempool/stack: not in enabled drivers build config
00:02:21.596 dma/cnxk: not in enabled drivers build config
00:02:21.596 dma/dpaa: not in enabled drivers build config
00:02:21.596 dma/dpaa2: not in enabled drivers build config
00:02:21.596 dma/hisilicon: not in enabled drivers build config
00:02:21.596 dma/idxd: not in enabled drivers build config
00:02:21.596 dma/ioat: not in enabled drivers build config
00:02:21.596 dma/odm: not in enabled drivers build config
00:02:21.596 dma/skeleton: not in enabled drivers build config
00:02:21.596 net/af_packet: not in enabled drivers build config
00:02:21.596 net/af_xdp: not in enabled drivers build config
00:02:21.596 net/ark: not in enabled drivers build config
00:02:21.596 net/atlantic: not in enabled drivers build config
00:02:21.596 net/avp: not in enabled drivers build config
00:02:21.596 net/axgbe: not in enabled drivers build config
00:02:21.596 net/bnx2x: not in enabled drivers build config
00:02:21.596 net/bnxt: not in enabled drivers build config
00:02:21.596 net/bonding: not in enabled drivers build config
00:02:21.596 net/cnxk: not in enabled drivers build config
00:02:21.596 net/cpfl: not in enabled drivers build config
00:02:21.596 net/cxgbe: not in enabled drivers build config
00:02:21.596 net/dpaa: not in enabled drivers build config
00:02:21.596 net/dpaa2: not in enabled drivers build config
00:02:21.596 net/e1000: not in enabled drivers build config
00:02:21.596 net/ena: not in enabled drivers build config
00:02:21.596 net/enetc: not in enabled drivers build config
00:02:21.596 net/enetfec: not in enabled drivers build config
00:02:21.596 net/enic: not in enabled drivers build config
00:02:21.596 net/failsafe: not in enabled drivers build config
00:02:21.596 net/fm10k: not in enabled drivers build config
00:02:21.596 net/gve: not in enabled drivers build config
00:02:21.596 net/hinic: not in enabled drivers build config
00:02:21.596 net/hns3: not in enabled drivers build config
00:02:21.596 net/iavf: not in enabled drivers build config
00:02:21.596 net/ice: not in enabled drivers build config
00:02:21.596 net/idpf: not in enabled drivers build config
00:02:21.596 net/igc: not in enabled drivers build config
00:02:21.596 net/ionic: not in enabled drivers build config
00:02:21.596 net/ipn3ke: not in enabled drivers build config
00:02:21.596 net/ixgbe: not in enabled drivers build config
00:02:21.596 net/mana: not in enabled drivers build config
00:02:21.596 net/memif: not in enabled drivers build config
00:02:21.596 net/mlx4: not in enabled drivers build config
00:02:21.596 net/mlx5: not in enabled drivers build config
00:02:21.596 net/mvneta: not in enabled drivers build config
00:02:21.596 net/mvpp2: not in enabled drivers build config
00:02:21.596 net/netvsc: not in enabled drivers build config
00:02:21.596 net/nfb: not in enabled drivers build config
00:02:21.596 net/nfp: not in enabled drivers build config
00:02:21.596 net/ngbe: not in enabled drivers build config
00:02:21.596 net/ntnic: not in enabled drivers build config
00:02:21.596 net/null: not in enabled drivers build config
00:02:21.596 net/octeontx: not in enabled drivers build config
00:02:21.596 net/octeon_ep: not in enabled drivers build config
00:02:21.596 net/pcap: not in enabled drivers build config
00:02:21.596 net/pfe: not in enabled drivers build config
00:02:21.596 net/qede: not in enabled drivers build config
00:02:21.596 net/ring: not in enabled drivers build config
00:02:21.596 net/sfc: not in enabled drivers build config
00:02:21.596 net/softnic: not in enabled drivers build config
00:02:21.596 net/tap: not in enabled drivers build config
00:02:21.596 net/thunderx: not in enabled drivers build config
00:02:21.596 net/txgbe: not in enabled drivers build config
00:02:21.596 net/vdev_netvsc: not in enabled drivers build config
00:02:21.596 net/vhost: not in enabled drivers build config
00:02:21.596 net/virtio: not in enabled drivers build config
00:02:21.596 net/vmxnet3: not in enabled drivers build config
00:02:21.596 raw/cnxk_bphy: not in enabled drivers build config
00:02:21.596 raw/cnxk_gpio: not in enabled drivers build config
00:02:21.596 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:21.596 raw/ifpga: not in enabled drivers build config
00:02:21.596 raw/ntb: not in enabled drivers build config
00:02:21.596 raw/skeleton: not in enabled drivers build config
00:02:21.596 crypto/armv8: not in enabled drivers build config
00:02:21.596 crypto/bcmfs: not in enabled drivers build config
00:02:21.596 crypto/caam_jr: not in enabled drivers build config
00:02:21.596 crypto/ccp: not in enabled drivers build config
00:02:21.596 crypto/cnxk: not in enabled drivers build config
00:02:21.596 crypto/dpaa_sec: not in enabled drivers build config
00:02:21.596 crypto/dpaa2_sec: not in enabled drivers build config
00:02:21.596 crypto/ionic: not in enabled drivers build config
00:02:21.596 crypto/ipsec_mb: not in enabled drivers build config
00:02:21.596 crypto/mlx5: not in enabled drivers build config
00:02:21.596 crypto/mvsam: not in enabled drivers build config
00:02:21.596 crypto/nitrox: not in enabled drivers build config
00:02:21.596 crypto/null: not in enabled drivers build config
00:02:21.596 crypto/octeontx: not in enabled drivers build config
00:02:21.596 crypto/openssl: not in enabled drivers build config
00:02:21.596 crypto/scheduler: not in enabled drivers build config
00:02:21.596 crypto/uadk: not in enabled drivers build config
00:02:21.596 crypto/virtio: not in enabled drivers build config
00:02:21.596 compress/isal: not in enabled drivers build config
00:02:21.596 compress/mlx5: not in enabled drivers build config
00:02:21.596 compress/nitrox: not in enabled drivers build config
00:02:21.596 compress/octeontx: not in enabled drivers build config
00:02:21.596 compress/uadk: not in enabled drivers build config
00:02:21.596 compress/zlib: not in enabled drivers build config
00:02:21.596 regex/mlx5: not in enabled drivers build config
00:02:21.596 regex/cn9k: not in enabled drivers build config
00:02:21.596 ml/cnxk: not in enabled drivers build config
00:02:21.596 vdpa/ifc: not in enabled drivers build config
00:02:21.596 vdpa/mlx5: not in enabled drivers build config
00:02:21.596 vdpa/nfp: not in enabled drivers build config
00:02:21.596 vdpa/sfc: not in enabled drivers build config
00:02:21.596 event/cnxk: not in enabled drivers build config
00:02:21.596 event/dlb2: not in enabled drivers build config
00:02:21.596 event/dpaa: not in enabled drivers build config
00:02:21.596 event/dpaa2: not in enabled drivers build config
00:02:21.596 event/dsw: not in enabled drivers build config
00:02:21.596 event/opdl: not in enabled drivers build config
00:02:21.596 event/skeleton: not in enabled drivers build config
00:02:21.596 event/sw: not in enabled drivers build config
00:02:21.596 event/octeontx: not in enabled drivers build config
00:02:21.596 baseband/acc: not in enabled drivers build config
00:02:21.596 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:21.596 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:21.596 baseband/la12xx: not in enabled drivers build config
00:02:21.596 baseband/null: not in enabled drivers build config
00:02:21.596 baseband/turbo_sw: not in enabled drivers build config
00:02:21.596 gpu/cuda: not in enabled drivers build config
00:02:21.596
00:02:21.596
00:02:21.596 Build targets in project: 221
00:02:21.596
00:02:21.596 DPDK 24.11.0-rc0
00:02:21.596
00:02:21.596 User defined options
00:02:21.596 libdir : lib
00:02:21.596 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:21.596 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:21.596 c_link_args :
00:02:21.596 enable_docs : false
00:02:21.596 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:21.596 enable_kmods : false
00:02:21.596 machine : native
00:02:21.596 tests : false
00:02:21.597
00:02:21.597 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:21.597 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:02:21.597 01:22:20 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:02:21.597 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:21.597 [1/720] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:21.857 [2/720] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:21.857 [3/720] Linking static target lib/librte_kvargs.a
00:02:21.857 [4/720] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:21.857 [5/720] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:21.857 [6/720] Linking static target lib/librte_log.a
00:02:21.857 [7/720] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o
00:02:21.857 [8/720] Linking static target lib/librte_argparse.a
00:02:21.857 [9/720] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:21.857 [10/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:21.857 [11/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:21.857 [12/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:21.857 [13/720] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:21.857 [14/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:22.117 [15/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:22.117 [16/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:22.117 [17/720] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output)
00:02:22.117 [18/720] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:22.117 [19/720] Linking target lib/librte_log.so.25.0
00:02:22.117 [20/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:22.376 [21/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:22.376 [22/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:22.377 [23/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:22.377 [24/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:22.377 [25/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:22.377 [26/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:22.377 [27/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:22.636 [28/720] Generating symbol file lib/librte_log.so.25.0.p/librte_log.so.25.0.symbols
00:02:22.636 [29/720] Linking target lib/librte_kvargs.so.25.0
00:02:22.636 [30/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:22.636 [31/720] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:22.636 [32/720] Linking static target lib/librte_telemetry.a
00:02:22.636 [33/720] Linking target lib/librte_argparse.so.25.0
00:02:22.636 [34/720] Generating symbol file lib/librte_kvargs.so.25.0.p/librte_kvargs.so.25.0.symbols
00:02:22.636 [35/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:22.636 [36/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:22.636 [37/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:22.636 [38/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:22.895 [39/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:22.895 [40/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:22.895 [41/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:22.895 [42/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:22.895 [43/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:22.895 [44/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:22.895 [45/720] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:22.895 [46/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:22.895 [47/720] Linking target lib/librte_telemetry.so.25.0
00:02:23.155 [48/720] Generating symbol file lib/librte_telemetry.so.25.0.p/librte_telemetry.so.25.0.symbols
00:02:23.155 [49/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:23.155 [50/720] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:23.155 [51/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:23.155 [52/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:23.155 [53/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:23.414 [54/720] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:23.415 [55/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:23.415 [56/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:23.415 [57/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:23.415 [58/720] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:23.415 [59/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:23.415 [60/720] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:23.673 [61/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:23.674 [62/720] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:23.674 [63/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:23.674 [64/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:23.674 [65/720] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:23.674 [66/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:23.674 [67/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:23.674 [68/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:23.674 [69/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:23.674 [70/720] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:23.932 [71/720] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:23.932 [72/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:23.932 [73/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:23.932 [74/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:24.191 [75/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:24.191 [76/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:24.191 [77/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:24.191 [78/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:24.191 [79/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:24.191 [80/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:24.191 [81/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:24.191 [82/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:24.451 [83/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:24.451 [84/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o
00:02:24.451 [85/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:24.451 [86/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:24.451 [87/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:24.451 [88/720] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:24.451 [89/720] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:24.451 [90/720] Linking static target lib/librte_ring.a
00:02:24.710 [91/720] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:24.710 [92/720] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:24.710 [93/720] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.710 [94/720] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:24.710 [95/720] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:24.710 [96/720] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:24.710 [97/720] Linking static target lib/librte_eal.a
00:02:24.968 [98/720] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:24.968 [99/720] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:24.968 [100/720] Linking static target lib/librte_mempool.a
00:02:24.968 [101/720] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:24.968 [102/720] Linking static target lib/librte_rcu.a
00:02:25.227 [103/720] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:25.227 [104/720] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:25.227 [105/720] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:25.227 [106/720] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:25.227 [107/720] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:25.227 [108/720] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:25.227 [109/720] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:25.227 [110/720] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.486 [111/720] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:25.486 [112/720] Linking static target lib/librte_mbuf.a
00:02:25.486 [113/720] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:25.486 [114/720] Linking static target lib/librte_net.a
00:02:25.486 [115/720] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:25.486 [116/720] Linking static target lib/librte_meter.a
00:02:25.486 [117/720] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:25.486 [118/720] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.745 [119/720] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:25.745 [120/720] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.745 [121/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:25.745 [122/720] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.745 [123/720] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:26.004 [124/720] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.004 [125/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:26.265 [126/720] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:26.524 [127/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:26.524 [128/720] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:26.524 [129/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:26.524 [130/720] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:26.524 [131/720] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:26.524 [132/720] Linking static target lib/librte_pci.a
00:02:26.524 [133/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:26.524 [134/720] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:26.524 [135/720] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:26.783 [136/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:26.783 [137/720] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.783 [138/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:26.783 [139/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:26.783 [140/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:26.783 [141/720] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:26.783 [142/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:26.783 [143/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:26.783 [144/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:26.783 [145/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:26.783 [146/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:27.043 [147/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:27.043 [148/720] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:27.043 [149/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:27.043 [150/720] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:27.043 [151/720] Linking static target lib/librte_cmdline.a
00:02:27.302 [152/720] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:27.302 [153/720] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:02:27.302 [154/720] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:02:27.302 [155/720] Linking static target lib/librte_metrics.a
00:02:27.302 [156/720] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:27.302 [157/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:27.561 [158/720] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:27.561 [159/720] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.561 [160/720] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:27.820 [161/720] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.820 [162/720] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:27.820 [163/720] Linking static target lib/librte_timer.a
00:02:28.080 [164/720] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:02:28.080 [165/720] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:02:28.080 [166/720] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:02:28.080 [167/720] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.080 [168/720] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:02:28.667 [169/720] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:02:28.667 [170/720] Linking static target lib/librte_bitratestats.a
00:02:28.667 [171/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:02:28.667 [172/720] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.667 [173/720] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:02:28.667 [174/720] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:02:28.667 [175/720] Linking static target lib/librte_bbdev.a
00:02:28.933 [176/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:02:28.933 [177/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:02:29.193 [178/720] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:29.193 [179/720] Linking static target lib/librte_hash.a
00:02:29.193 [180/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:02:29.193 [181/720] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.193 [182/720] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:29.193 [183/720] Linking static target lib/librte_ethdev.a
00:02:29.453 [184/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:02:29.453 [185/720] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o
00:02:29.453 [186/720] Linking static target lib/acl/libavx2_tmp.a
00:02:29.453 [187/720] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:02:29.453 [188/720] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.453 [189/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:02:29.712 [190/720] Linking target lib/librte_eal.so.25.0
00:02:29.712 [191/720] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.712 [192/720] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:02:29.712 [193/720] Generating symbol file lib/librte_eal.so.25.0.p/librte_eal.so.25.0.symbols
00:02:29.712 [194/720] Linking static target lib/librte_cfgfile.a
00:02:29.712 [195/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:02:29.712 [196/720] Linking target lib/librte_ring.so.25.0
00:02:29.712 [197/720] Linking target lib/librte_meter.so.25.0
00:02:29.712 [198/720] Linking target lib/librte_pci.so.25.0
00:02:29.972 [199/720] Generating symbol file lib/librte_ring.so.25.0.p/librte_ring.so.25.0.symbols
00:02:29.972 [200/720] Generating symbol file lib/librte_meter.so.25.0.p/librte_meter.so.25.0.symbols
00:02:29.972 [201/720] Generating symbol file lib/librte_pci.so.25.0.p/librte_pci.so.25.0.symbols
00:02:29.972 [202/720] Linking target lib/librte_rcu.so.25.0
00:02:29.972 [203/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:02:29.972 [204/720] Linking target lib/librte_timer.so.25.0
00:02:29.972 [205/720] Linking target lib/librte_mempool.so.25.0
00:02:29.972 [206/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:02:29.972 [207/720] Generating symbol file lib/librte_rcu.so.25.0.p/librte_rcu.so.25.0.symbols
00:02:29.972 [208/720] Generating symbol file lib/librte_timer.so.25.0.p/librte_timer.so.25.0.symbols
00:02:29.972 [209/720] Generating symbol file lib/librte_mempool.so.25.0.p/librte_mempool.so.25.0.symbols
00:02:29.972 [210/720] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:29.972 [211/720] Linking target lib/librte_mbuf.so.25.0
00:02:30.232 [212/720] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:30.232 [213/720] Generating symbol file lib/librte_mbuf.so.25.0.p/librte_mbuf.so.25.0.symbols
00:02:30.232 [214/720] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.232 [215/720] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:02:30.232 [216/720] Linking target lib/librte_net.so.25.0
00:02:30.232 [217/720] Linking static target lib/librte_bpf.a
00:02:30.232 [218/720] Linking target lib/librte_bbdev.so.25.0
00:02:30.232 [219/720] Linking target lib/librte_cfgfile.so.25.0
00:02:30.232 [220/720] Generating symbol file lib/librte_net.so.25.0.p/librte_net.so.25.0.symbols
00:02:30.232 [221/720] Linking target lib/librte_cmdline.so.25.0
00:02:30.491 [222/720] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:30.491 [223/720] Linking target lib/librte_hash.so.25.0
00:02:30.491 [224/720] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o
00:02:30.491 [225/720] Linking static target lib/librte_acl.a
00:02:30.491 [226/720] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.491 [227/720] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:30.491 [228/720] Linking static target lib/librte_compressdev.a
00:02:30.491 [229/720] Generating symbol file lib/librte_hash.so.25.0.p/librte_hash.so.25.0.symbols
00:02:30.491 [230/720] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:30.751 [231/720] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.751 [232/720] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:02:30.751 [233/720] Linking target lib/librte_acl.so.25.0
00:02:30.751 [234/720] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:02:30.751 [235/720] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:02:30.751 [236/720] Linking static target lib/librte_distributor.a
00:02:30.751 [237/720] Generating symbol file lib/librte_acl.so.25.0.p/librte_acl.so.25.0.symbols
00:02:30.751 [238/720] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:31.010 [239/720] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.010 [240/720] Linking target lib/librte_compressdev.so.25.0
00:02:31.010 [241/720] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.010 [242/720] Linking target lib/librte_distributor.so.25.0
00:02:31.010 [243/720] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:02:31.270 [244/720] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:31.270 [245/720] Linking static target lib/librte_dmadev.a
00:02:31.270 [246/720] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:02:31.529 [247/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:02:31.529 [248/720] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.529 [249/720] Linking target lib/librte_dmadev.so.25.0
00:02:31.529 [250/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o
00:02:31.529 [251/720] Generating symbol file lib/librte_dmadev.so.25.0.p/librte_dmadev.so.25.0.symbols
00:02:31.833 [252/720] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:02:31.833 [253/720] Linking static target lib/librte_efd.a
00:02:31.833 [254/720] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.833 [255/720] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:31.833 [256/720] Linking static target lib/librte_cryptodev.a
00:02:31.833 [257/720] Linking target lib/librte_efd.so.25.0
00:02:31.833 [258/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:02:32.092 [259/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:02:32.092 [260/720] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o
00:02:32.092 [261/720] Linking static target lib/librte_dispatcher.a
00:02:32.352 [262/720] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:02:32.352 [263/720] Linking static target lib/librte_gpudev.a
00:02:32.352 [264/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:02:32.352 [265/720] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:02:32.352 [266/720] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:02:32.611 [267/720] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.611 [268/720] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o
00:02:32.870 [269/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:02:32.870 [270/720] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:02:32.870 [271/720] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.870 [272/720] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.870 [273/720] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:02:32.870 [274/720] Linking target lib/librte_cryptodev.so.25.0
00:02:32.870 [275/720] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:02:32.870 [276/720] Linking target lib/librte_gpudev.so.25.0
00:02:32.870 [277/720] Linking static target lib/librte_gro.a
00:02:32.870 [278/720] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:02:33.130 [279/720] Generating symbol file lib/librte_cryptodev.so.25.0.p/librte_cryptodev.so.25.0.symbols
00:02:33.130 [280/720] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.130 [281/720] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:02:33.130 [282/720] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:02:33.130 [283/720] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:02:33.130 [284/720] Linking static target lib/librte_eventdev.a
00:02:33.130 [285/720] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:02:33.389 [286/720] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:02:33.389 [287/720] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:02:33.389 [288/720] Linking static target lib/librte_gso.a
00:02:33.389 [289/720] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.389 [290/720] Linking target lib/librte_ethdev.so.25.0
00:02:33.389 [291/720] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.649 [292/720] Generating symbol file lib/librte_ethdev.so.25.0.p/librte_ethdev.so.25.0.symbols
00:02:33.649 [293/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:02:33.649 [294/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:02:33.649 [295/720] Linking target lib/librte_metrics.so.25.0
00:02:33.649 [296/720] Linking target lib/librte_bpf.so.25.0
00:02:33.649 [297/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:02:33.649 [298/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:02:33.649 [299/720] Linking target lib/librte_gro.so.25.0
00:02:33.649 [300/720] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:02:33.649 [301/720] Linking target lib/librte_gso.so.25.0
00:02:33.649 [302/720] Linking static target lib/librte_jobstats.a
00:02:33.649 [303/720] Generating symbol file lib/librte_metrics.so.25.0.p/librte_metrics.so.25.0.symbols
00:02:33.649 [304/720] Generating symbol file lib/librte_bpf.so.25.0.p/librte_bpf.so.25.0.symbols
00:02:33.649 [305/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:02:33.649 [306/720] Linking target lib/librte_bitratestats.so.25.0
00:02:33.649 [307/720] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:02:33.908 [308/720] Linking static target lib/librte_ip_frag.a
00:02:33.908 [309/720] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.908 [310/720] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:02:33.908 [311/720] Linking static target lib/librte_latencystats.a
00:02:33.908 [312/720] Linking target lib/librte_jobstats.so.25.0
00:02:33.908 [313/720] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.168 [314/720] Linking target lib/librte_ip_frag.so.25.0
00:02:34.168 [315/720] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:02:34.168 [316/720] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:02:34.168 [317/720] Linking static target lib/member/libsketch_avx512_tmp.a
00:02:34.168 [318/720] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.168 [319/720] Generating symbol file lib/librte_ip_frag.so.25.0.p/librte_ip_frag.so.25.0.symbols
00:02:34.168 [320/720] Linking target lib/librte_latencystats.so.25.0
00:02:34.168 [321/720] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:02:34.168 [322/720] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:34.168 [323/720] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:34.427 [324/720] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:34.427 [325/720] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:02:34.427 [326/720] Linking static target lib/librte_lpm.a
00:02:34.687 [327/720] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:02:34.687 [328/720] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:34.687 [329/720] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:34.687 [330/720] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:34.687 [331/720] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:02:34.687 [332/720] Linking static target lib/librte_pcapng.a
00:02:34.687 [333/720] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:02:34.687 [334/720] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.687 [335/720] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:34.687 [336/720] Linking target lib/librte_lpm.so.25.0
00:02:34.948 [337/720] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:34.948 [338/720] Generating symbol file lib/librte_lpm.so.25.0.p/librte_lpm.so.25.0.symbols
00:02:34.948 [339/720] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.948 [340/720] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:34.948 [341/720] Linking target lib/librte_pcapng.so.25.0
00:02:34.948 [342/720] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:34.948 [343/720] Linking target lib/librte_eventdev.so.25.0
00:02:35.208 [344/720] Generating symbol file lib/librte_pcapng.so.25.0.p/librte_pcapng.so.25.0.symbols
00:02:35.208 [345/720] Generating symbol file lib/librte_eventdev.so.25.0.p/librte_eventdev.so.25.0.symbols
00:02:35.208 [346/720] Linking target lib/librte_dispatcher.so.25.0
00:02:35.208 [347/720] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:35.208 [348/720] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:35.208 [349/720] Linking static target lib/librte_power.a
00:02:35.208 [350/720] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o
00:02:35.208 [351/720] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o
00:02:35.208 [352/720] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:02:35.208 [353/720] Linking static target lib/librte_regexdev.a
00:02:35.208 [354/720] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:02:35.208 [355/720] Linking static target lib/librte_rawdev.a
00:02:35.468 [356/720] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o
00:02:35.468 [357/720] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:02:35.468 [358/720] Linking static target lib/librte_member.a
00:02:35.728 [359/720] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o
00:02:35.728 [360/720] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o
00:02:35.728 [361/720] Linking static target
lib/librte_mldev.a 00:02:35.728 [362/720] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.728 [363/720] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.728 [364/720] Linking target lib/librte_rawdev.so.25.0 00:02:35.728 [365/720] Linking target lib/librte_power.so.25.0 00:02:35.728 [366/720] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.728 [367/720] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:35.728 [368/720] Linking target lib/librte_member.so.25.0 00:02:35.988 [369/720] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:35.988 [370/720] Linking static target lib/librte_reorder.a 00:02:35.988 [371/720] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:35.988 [372/720] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:35.988 [373/720] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.988 [374/720] Linking target lib/librte_regexdev.so.25.0 00:02:35.988 [375/720] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:35.988 [376/720] Linking static target lib/librte_rib.a 00:02:35.988 [377/720] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:36.248 [378/720] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.248 [379/720] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:36.248 [380/720] Linking target lib/librte_reorder.so.25.0 00:02:36.248 [381/720] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:36.248 [382/720] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:36.248 [383/720] Linking static target lib/librte_stack.a 00:02:36.248 [384/720] Generating symbol file lib/librte_reorder.so.25.0.p/librte_reorder.so.25.0.symbols 00:02:36.248 [385/720] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:02:36.507 [386/720] Linking static target lib/librte_security.a 00:02:36.507 [387/720] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.507 [388/720] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:36.507 [389/720] Linking target lib/librte_rib.so.25.0 00:02:36.507 [390/720] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.507 [391/720] Linking target lib/librte_stack.so.25.0 00:02:36.507 [392/720] Generating symbol file lib/librte_rib.so.25.0.p/librte_rib.so.25.0.symbols 00:02:36.767 [393/720] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:36.767 [394/720] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:36.767 [395/720] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.767 [396/720] Linking target lib/librte_security.so.25.0 00:02:36.767 [397/720] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:37.027 [398/720] Generating symbol file lib/librte_security.so.25.0.p/librte_security.so.25.0.symbols 00:02:37.027 [399/720] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.027 [400/720] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:37.027 [401/720] Linking target lib/librte_mldev.so.25.0 00:02:37.027 [402/720] Linking static target lib/librte_sched.a 00:02:37.286 [403/720] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:37.286 [404/720] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:37.286 [405/720] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.286 [406/720] Linking target lib/librte_sched.so.25.0 00:02:37.545 [407/720] Generating symbol file lib/librte_sched.so.25.0.p/librte_sched.so.25.0.symbols 00:02:37.545 [408/720] Compiling C object 
lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:37.545 [409/720] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:37.545 [410/720] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:37.805 [411/720] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:37.805 [412/720] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:37.805 [413/720] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:37.805 [414/720] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:38.064 [415/720] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:38.065 [416/720] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:38.065 [417/720] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:38.324 [418/720] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:38.324 [419/720] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:38.324 [420/720] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:38.324 [421/720] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:38.324 [422/720] Linking static target lib/librte_ipsec.a 00:02:38.324 [423/720] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:02:38.324 [424/720] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:38.595 [425/720] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.595 [426/720] Linking target lib/librte_ipsec.so.25.0 00:02:38.595 [427/720] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:38.595 [428/720] Generating symbol file lib/librte_ipsec.so.25.0.p/librte_ipsec.so.25.0.symbols 00:02:38.595 [429/720] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:38.906 [430/720] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:38.906 [431/720] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:38.906 [432/720] Linking static target lib/librte_fib.a 00:02:38.906 [433/720] 
Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:39.180 [434/720] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:39.180 [435/720] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:39.180 [436/720] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:39.180 [437/720] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:39.180 [438/720] Linking static target lib/librte_pdcp.a 00:02:39.180 [439/720] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.180 [440/720] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:39.180 [441/720] Linking target lib/librte_fib.so.25.0 00:02:39.440 [442/720] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.440 [443/720] Linking target lib/librte_pdcp.so.25.0 00:02:39.699 [444/720] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:39.699 [445/720] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:39.699 [446/720] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:39.958 [447/720] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:39.958 [448/720] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:39.958 [449/720] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:40.218 [450/720] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:40.218 [451/720] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:40.218 [452/720] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:40.218 [453/720] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:40.218 [454/720] Linking static target lib/librte_port.a 00:02:40.218 [455/720] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:40.218 [456/720] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:40.478 [457/720] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:40.478 [458/720] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:40.478 [459/720] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:40.478 [460/720] Linking static target lib/librte_pdump.a 00:02:40.737 [461/720] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:40.737 [462/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:40.737 [463/720] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.737 [464/720] Linking target lib/librte_port.so.25.0 00:02:40.737 [465/720] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.737 [466/720] Linking target lib/librte_pdump.so.25.0 00:02:40.997 [467/720] Generating symbol file lib/librte_port.so.25.0.p/librte_port.so.25.0.symbols 00:02:40.997 [468/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:41.256 [469/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:41.256 [470/720] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:41.256 [471/720] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:02:41.256 [472/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:41.256 [473/720] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:41.256 [474/720] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:41.516 [475/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:41.516 [476/720] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:41.516 [477/720] Linking static target lib/librte_table.a 00:02:41.516 [478/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:41.775 
[479/720] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:41.775 [480/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:42.034 [481/720] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:42.034 [482/720] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.034 [483/720] Linking target lib/librte_table.so.25.0 00:02:42.292 [484/720] Generating symbol file lib/librte_table.so.25.0.p/librte_table.so.25.0.symbols 00:02:42.292 [485/720] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:42.292 [486/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:42.292 [487/720] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:42.292 [488/720] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:42.552 [489/720] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:42.552 [490/720] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:42.552 [491/720] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:42.811 [492/720] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:42.811 [493/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:43.069 [494/720] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:43.069 [495/720] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:43.069 [496/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:43.069 [497/720] Linking static target lib/librte_graph.a 00:02:43.069 [498/720] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:43.069 [499/720] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:43.328 [500/720] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:43.588 [501/720] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:43.588 
[502/720] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.588 [503/720] Linking target lib/librte_graph.so.25.0 00:02:43.588 [504/720] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:43.588 [505/720] Generating symbol file lib/librte_graph.so.25.0.p/librte_graph.so.25.0.symbols 00:02:43.588 [506/720] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:43.847 [507/720] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:43.847 [508/720] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:43.847 [509/720] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:43.847 [510/720] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:44.107 [511/720] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:44.107 [512/720] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:44.107 [513/720] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:44.107 [514/720] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:44.366 [515/720] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:44.366 [516/720] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:44.366 [517/720] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:44.366 [518/720] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:44.366 [519/720] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:44.625 [520/720] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:44.625 [521/720] Linking static target lib/librte_node.a 00:02:44.625 [522/720] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:44.625 [523/720] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:44.625 [524/720] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:44.884 [525/720] 
Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:44.884 [526/720] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:44.884 [527/720] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.884 [528/720] Linking target lib/librte_node.so.25.0 00:02:44.884 [529/720] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:44.884 [530/720] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:44.884 [531/720] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:44.884 [532/720] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:44.884 [533/720] Compiling C object drivers/librte_bus_pci.so.25.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:44.884 [534/720] Linking static target drivers/librte_bus_vdev.a 00:02:44.884 [535/720] Linking static target drivers/librte_bus_pci.a 00:02:45.144 [536/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:45.144 [537/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:45.144 [538/720] Compiling C object drivers/librte_bus_vdev.so.25.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:45.144 [539/720] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.144 [540/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:45.144 [541/720] Linking target drivers/librte_bus_vdev.so.25.0 00:02:45.144 [542/720] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:45.404 [543/720] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:45.404 [544/720] Generating symbol file drivers/librte_bus_vdev.so.25.0.p/librte_bus_vdev.so.25.0.symbols 00:02:45.404 [545/720] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.404 [546/720] 
Linking target drivers/librte_bus_pci.so.25.0 00:02:45.404 [547/720] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:45.404 [548/720] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:45.404 [549/720] Linking static target drivers/librte_mempool_ring.a 00:02:45.404 [550/720] Compiling C object drivers/librte_mempool_ring.so.25.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:45.404 [551/720] Generating symbol file drivers/librte_bus_pci.so.25.0.p/librte_bus_pci.so.25.0.symbols 00:02:45.404 [552/720] Linking target drivers/librte_mempool_ring.so.25.0 00:02:45.404 [553/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:45.664 [554/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:45.923 [555/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:46.183 [556/720] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:46.183 [557/720] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:46.751 [558/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:46.751 [559/720] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:46.751 [560/720] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:47.010 [561/720] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:47.010 [562/720] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:47.010 [563/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:47.010 [564/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:47.269 [565/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:47.269 [566/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:47.528 [567/720] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:47.528 [568/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:47.528 [569/720] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:02:47.528 [570/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:48.097 [571/720] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:48.097 [572/720] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:48.097 [573/720] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:48.356 [574/720] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:48.356 [575/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:48.356 [576/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:48.356 [577/720] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:48.615 [578/720] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:48.615 [579/720] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:48.615 [580/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:48.615 [581/720] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:48.875 [582/720] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:48.875 [583/720] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:02:48.875 [584/720] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:48.875 [585/720] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:48.875 [586/720] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:48.875 [587/720] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:49.135 [588/720] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:49.135 [589/720] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:49.135 [590/720] Compiling C object 
app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:49.135 [591/720] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:49.395 [592/720] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:49.395 [593/720] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:49.395 [594/720] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:49.654 [595/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:49.654 [596/720] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:49.654 [597/720] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:49.654 [598/720] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:49.654 [599/720] Compiling C object drivers/librte_net_i40e.so.25.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:49.654 [600/720] Linking static target drivers/librte_net_i40e.a 00:02:49.654 [601/720] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:49.913 [602/720] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:50.173 [603/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:50.173 [604/720] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.173 [605/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:50.173 [606/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:50.173 [607/720] Linking target drivers/librte_net_i40e.so.25.0 00:02:50.173 [608/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:50.450 [609/720] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:50.450 [610/720] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:50.450 [611/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:50.738 [612/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:50.997 [613/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:50.997 [614/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:50.997 [615/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:50.997 [616/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:50.997 [617/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:50.997 [618/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:51.256 [619/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:51.256 [620/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:51.256 [621/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:51.256 [622/720] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:51.518 [623/720] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:51.518 [624/720] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:51.518 [625/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:51.518 [626/720] Linking static target lib/librte_vhost.a 00:02:51.777 [627/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:51.777 [628/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:51.777 [629/720] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:52.036 [630/720] Compiling 
C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:52.295 [631/720] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.295 [632/720] Linking target lib/librte_vhost.so.25.0 00:02:52.555 [633/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:52.555 [634/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:52.555 [635/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:52.555 [636/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:52.555 [637/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:52.555 [638/720] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:52.814 [639/720] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:52.814 [640/720] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:52.814 [641/720] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:52.814 [642/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:53.072 [643/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:53.072 [644/720] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:53.072 [645/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:53.072 [646/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:53.332 [647/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:53.332 [648/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:53.332 [649/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:53.332 [650/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:53.332 [651/720] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:53.591 [652/720] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:53.591 [653/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:53.591 [654/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:53.591 [655/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:53.850 [656/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:53.850 [657/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:53.850 [658/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:54.110 [659/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:54.110 [660/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:54.110 [661/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:54.110 [662/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:54.110 [663/720] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:54.369 [664/720] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:54.369 [665/720] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:54.369 [666/720] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:54.369 [667/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:54.369 [668/720] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:54.628 [669/720] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:54.628 [670/720] Linking static target lib/librte_pipeline.a 00:02:54.628 [671/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:54.888 [672/720] Compiling C object 
app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:02:54.888 [673/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:02:54.888 [674/720] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:02:55.147 [675/720] Linking target app/dpdk-dumpcap
00:02:55.147 [676/720] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:02:55.147 [677/720] Linking target app/dpdk-graph
00:02:55.147 [678/720] Linking target app/dpdk-pdump
00:02:55.407 [679/720] Linking target app/dpdk-proc-info
00:02:55.407 [680/720] Linking target app/dpdk-test-acl
00:02:55.407 [681/720] Linking target app/dpdk-test-bbdev
00:02:55.667 [682/720] Linking target app/dpdk-test-cmdline
00:02:55.667 [683/720] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:02:55.667 [684/720] Linking target app/dpdk-test-crypto-perf
00:02:55.667 [685/720] Linking target app/dpdk-test-compress-perf
00:02:55.667 [686/720] Linking target app/dpdk-test-dma-perf
00:02:55.927 [687/720] Linking target app/dpdk-test-eventdev
00:02:55.927 [688/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:02:55.927 [689/720] Linking target app/dpdk-test-fib
00:02:55.927 [690/720] Linking target app/dpdk-test-flow-perf
00:02:55.927 [691/720] Linking target app/dpdk-test-mldev
00:02:55.927 [692/720] Linking target app/dpdk-test-gpudev
00:02:56.187 [693/720] Linking target app/dpdk-test-pipeline
00:02:56.187 [694/720] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:02:56.481 [695/720] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:02:56.481 [696/720] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:02:56.481 [697/720] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:02:56.481 [698/720] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o
00:02:56.741 [699/720] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:02:56.741 [700/720] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:02:56.741 [701/720] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:02:56.741 [702/720] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.741 [703/720] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:02:56.741 [704/720] Linking target lib/librte_pipeline.so.25.0
00:02:57.000 [705/720] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:02:57.000 [706/720] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:02:57.258 [707/720] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:02:57.258 [708/720] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:02:57.258 [709/720] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:02:57.518 [710/720] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:02:57.518 [711/720] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:02:57.518 [712/720] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o
00:02:57.518 [713/720] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:02:57.777 [714/720] Linking target app/dpdk-test-sad
00:02:57.777 [715/720] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:02:57.777 [716/720] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:02:57.777 [717/720] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:02:58.036 [718/720] Linking target app/dpdk-test-regex
00:02:58.296 [719/720] Linking target app/dpdk-test-security-perf
00:02:58.296 [720/720] Linking target app/dpdk-testpmd
00:02:58.296 01:22:57 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s
00:02:58.296 01:22:57 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:02:58.296 01:22:57 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:02:58.296 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:58.296 [0/1] Installing files.
00:02:58.557 Installing subdir /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints
00:02:58.557 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/counters.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints
00:02:58.557 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/cpu.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints
00:02:58.557 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/memory.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints
00:02:58.557 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:02:58.557 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:02:58.557 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:02:58.557 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:58.557 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:58.557 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:58.557 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:58.557 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:58.557 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:58.557 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:58.557 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:58.557 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:58.557 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:58.557 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:58.557 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:58.557 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:58.557 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:58.557 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:02:58.557 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:02:58.557 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:02:58.557 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:02:58.557 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:02:58.557 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.560 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.560 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.560 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.560 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.560 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.560 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.560 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.560 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.560 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.560 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.560 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.560 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.560 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.560 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.560 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.560 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.560 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.560 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.821 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.821 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:58.821 Installing
/home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:58.821 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:58.821 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:58.821 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:58.822 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.822 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:58.822 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 
00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:58.823 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:58.823 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:58.823 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:58.823 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:58.823 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.823 Installing lib/librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.823 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_telemetry.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing 
lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_efd.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_pcapng.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:58.824 Installing lib/librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.087 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 
00:02:59.087 Installing lib/librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.087 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.087 Installing lib/librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.087 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.087 Installing lib/librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.087 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.087 Installing lib/librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.087 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.087 Installing lib/librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.087 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.087 Installing lib/librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.087 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.087 Installing lib/librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.087 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.087 Installing lib/librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.087 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.087 Installing lib/librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.087 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.087 Installing drivers/librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:02:59.087 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.087 Installing drivers/librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:02:59.087 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 
00:02:59.087 Installing drivers/librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:02:59.087 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.087 Installing drivers/librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:02:59.087 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.087 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.087 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.087 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.087 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.087 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.087 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.087 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.087 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.087 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.087 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.087 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.087 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.087 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.087 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.087 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.087 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.087 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.087 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.087 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.087 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/ptr_compress/rte_ptr_compress.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing 
/home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.088 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing 
/home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 
Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 
Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.089 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing 
/home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry-exporter.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:59.090 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:59.090 Installing symlink pointing to librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.25 00:02:59.090 Installing symlink pointing to librte_log.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:02:59.090 Installing symlink pointing to librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.25 00:02:59.090 Installing symlink pointing to librte_kvargs.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:59.090 Installing symlink pointing to librte_argparse.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.25 00:02:59.090 Installing symlink pointing to librte_argparse.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so 00:02:59.090 Installing symlink pointing to librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.25 00:02:59.090 Installing symlink pointing to librte_telemetry.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:59.090 Installing symlink pointing to librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.25 00:02:59.090 Installing symlink pointing to librte_eal.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:59.090 Installing symlink pointing to librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.25 00:02:59.090 Installing symlink pointing to librte_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:59.090 Installing symlink pointing to librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.25 00:02:59.090 Installing symlink pointing to librte_rcu.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:59.090 Installing symlink pointing to librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.25 00:02:59.090 Installing symlink pointing to librte_mempool.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:59.090 Installing symlink pointing to librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.25 00:02:59.090 Installing symlink pointing to librte_mbuf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:59.090 Installing symlink pointing to librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.25 00:02:59.090 Installing symlink pointing to librte_net.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:59.090 Installing symlink pointing to librte_meter.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.25 00:02:59.090 Installing symlink pointing to librte_meter.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:59.090 Installing symlink pointing to librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.25 00:02:59.090 Installing symlink pointing to librte_ethdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:59.090 Installing symlink pointing to librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.25 00:02:59.090 Installing symlink pointing to librte_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:59.090 Installing symlink pointing to librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.25 00:02:59.090 Installing symlink pointing to librte_cmdline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:59.090 Installing symlink pointing to librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.25 00:02:59.090 Installing symlink pointing to librte_metrics.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:59.090 Installing symlink pointing to librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.25 00:02:59.090 Installing symlink pointing to librte_hash.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:59.090 Installing symlink pointing to librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.25 00:02:59.090 Installing symlink pointing to librte_timer.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:59.090 Installing symlink pointing to librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.25 00:02:59.090 Installing symlink pointing to librte_acl.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:59.090 Installing symlink pointing to librte_bbdev.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.25 00:02:59.090 Installing symlink pointing to librte_bbdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:59.090 Installing symlink pointing to librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.25 00:02:59.090 Installing symlink pointing to librte_bitratestats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:59.090 Installing symlink pointing to librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.25 00:02:59.090 Installing symlink pointing to librte_bpf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:59.090 Installing symlink pointing to librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.25 00:02:59.090 Installing symlink pointing to librte_cfgfile.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:59.090 Installing symlink pointing to librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.25 00:02:59.090 Installing symlink pointing to librte_compressdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:59.090 Installing symlink pointing to librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.25 00:02:59.090 Installing symlink pointing to librte_cryptodev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:59.090 Installing symlink pointing to librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.25 00:02:59.090 Installing symlink pointing to librte_distributor.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:59.090 Installing symlink pointing to librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.25 00:02:59.090 Installing symlink pointing to librte_dmadev.so.25 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:59.090 Installing symlink pointing to librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.25 00:02:59.090 Installing symlink pointing to librte_efd.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:59.090 Installing symlink pointing to librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.25 00:02:59.090 Installing symlink pointing to librte_eventdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:59.090 Installing symlink pointing to librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.25 00:02:59.090 Installing symlink pointing to librte_dispatcher.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:02:59.090 Installing symlink pointing to librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.25 00:02:59.090 Installing symlink pointing to librte_gpudev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:59.090 Installing symlink pointing to librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.25 00:02:59.090 Installing symlink pointing to librte_gro.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:59.090 Installing symlink pointing to librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.25 00:02:59.091 Installing symlink pointing to librte_gso.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:59.091 Installing symlink pointing to librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.25 00:02:59.091 Installing symlink pointing to librte_ip_frag.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:59.091 Installing symlink pointing to librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.25 00:02:59.091 Installing symlink pointing to 
librte_jobstats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:59.091 Installing symlink pointing to librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.25 00:02:59.091 Installing symlink pointing to librte_latencystats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:59.091 Installing symlink pointing to librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.25 00:02:59.091 Installing symlink pointing to librte_lpm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:59.091 Installing symlink pointing to librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.25 00:02:59.091 Installing symlink pointing to librte_member.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:59.091 Installing symlink pointing to librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.25 00:02:59.091 './librte_bus_pci.so' -> 'dpdk/pmds-25.0/librte_bus_pci.so' 00:02:59.091 './librte_bus_pci.so.25' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25' 00:02:59.091 './librte_bus_pci.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25.0' 00:02:59.091 './librte_bus_vdev.so' -> 'dpdk/pmds-25.0/librte_bus_vdev.so' 00:02:59.091 './librte_bus_vdev.so.25' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25' 00:02:59.091 './librte_bus_vdev.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25.0' 00:02:59.091 './librte_mempool_ring.so' -> 'dpdk/pmds-25.0/librte_mempool_ring.so' 00:02:59.091 './librte_mempool_ring.so.25' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25' 00:02:59.091 './librte_mempool_ring.so.25.0' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25.0' 00:02:59.091 './librte_net_i40e.so' -> 'dpdk/pmds-25.0/librte_net_i40e.so' 00:02:59.091 './librte_net_i40e.so.25' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25' 00:02:59.091 './librte_net_i40e.so.25.0' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25.0' 00:02:59.091 Installing symlink pointing 
to librte_pcapng.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:59.091 Installing symlink pointing to librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.25 00:02:59.091 Installing symlink pointing to librte_power.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:59.091 Installing symlink pointing to librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.25 00:02:59.091 Installing symlink pointing to librte_rawdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:59.091 Installing symlink pointing to librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.25 00:02:59.091 Installing symlink pointing to librte_regexdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:59.091 Installing symlink pointing to librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.25 00:02:59.091 Installing symlink pointing to librte_mldev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:02:59.091 Installing symlink pointing to librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.25 00:02:59.091 Installing symlink pointing to librte_rib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:59.091 Installing symlink pointing to librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.25 00:02:59.091 Installing symlink pointing to librte_reorder.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:59.091 Installing symlink pointing to librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.25 00:02:59.091 Installing symlink pointing to librte_sched.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:59.091 Installing symlink pointing to librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.25 00:02:59.091 Installing symlink pointing 
to librte_security.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:59.091 Installing symlink pointing to librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.25 00:02:59.091 Installing symlink pointing to librte_stack.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:59.091 Installing symlink pointing to librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.25 00:02:59.091 Installing symlink pointing to librte_vhost.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:59.091 Installing symlink pointing to librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.25 00:02:59.091 Installing symlink pointing to librte_ipsec.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:59.091 Installing symlink pointing to librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.25 00:02:59.091 Installing symlink pointing to librte_pdcp.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:02:59.091 Installing symlink pointing to librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.25 00:02:59.091 Installing symlink pointing to librte_fib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:59.091 Installing symlink pointing to librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.25 00:02:59.091 Installing symlink pointing to librte_port.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:59.091 Installing symlink pointing to librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.25 00:02:59.091 Installing symlink pointing to librte_pdump.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:59.091 Installing symlink pointing to librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.25 00:02:59.091 Installing symlink pointing to librte_table.so.25 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:59.091 Installing symlink pointing to librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.25 00:02:59.091 Installing symlink pointing to librte_pipeline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:59.091 Installing symlink pointing to librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.25 00:02:59.091 Installing symlink pointing to librte_graph.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:59.091 Installing symlink pointing to librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.25 00:02:59.091 Installing symlink pointing to librte_node.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:59.091 Installing symlink pointing to librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25 00:02:59.091 Installing symlink pointing to librte_bus_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:02:59.091 Installing symlink pointing to librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25 00:02:59.091 Installing symlink pointing to librte_bus_vdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:02:59.091 Installing symlink pointing to librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25 00:02:59.091 Installing symlink pointing to librte_mempool_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:02:59.091 Installing symlink pointing to librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25 00:02:59.091 Installing symlink pointing to librte_net_i40e.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:02:59.091 Running custom 
install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-25.0' 00:02:59.091 01:22:57 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:02:59.091 ************************************ 00:02:59.091 END TEST build_native_dpdk 00:02:59.091 ************************************ 00:02:59.091 01:22:57 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:59.091 00:02:59.091 real 0m44.960s 00:02:59.091 user 5m2.261s 00:02:59.091 sys 0m56.259s 00:02:59.091 01:22:57 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:59.091 01:22:57 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:59.351 01:22:58 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:59.351 01:22:58 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:59.351 01:22:58 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:59.351 01:22:58 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:59.351 01:22:58 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:59.351 01:22:58 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:59.351 01:22:58 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:59.351 01:22:58 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:02:59.351 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:59.610 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.610 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:59.610 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:59.870 Using 'verbs' RDMA provider 00:03:16.144 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 
00:03:31.037 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:31.607 Creating mk/config.mk...done. 00:03:31.607 Creating mk/cc.flags.mk...done. 00:03:31.607 Type 'make' to build. 00:03:31.607 01:23:30 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:31.607 01:23:30 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:31.607 01:23:30 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:31.607 01:23:30 -- common/autotest_common.sh@10 -- $ set +x 00:03:31.607 ************************************ 00:03:31.607 START TEST make 00:03:31.607 ************************************ 00:03:31.607 01:23:30 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:32.175 make[1]: Nothing to be done for 'all'. 00:04:18.859 CC lib/ut/ut.o 00:04:18.859 CC lib/ut_mock/mock.o 00:04:18.859 CC lib/log/log.o 00:04:18.859 CC lib/log/log_flags.o 00:04:18.859 CC lib/log/log_deprecated.o 00:04:18.859 LIB libspdk_ut_mock.a 00:04:18.859 LIB libspdk_ut.a 00:04:18.859 LIB libspdk_log.a 00:04:18.859 SO libspdk_ut_mock.so.6.0 00:04:18.859 SO libspdk_ut.so.2.0 00:04:18.859 SO libspdk_log.so.7.0 00:04:18.859 SYMLINK libspdk_ut_mock.so 00:04:18.859 SYMLINK libspdk_ut.so 00:04:18.859 SYMLINK libspdk_log.so 00:04:18.859 CC lib/util/base64.o 00:04:18.859 CC lib/util/bit_array.o 00:04:18.859 CC lib/util/crc32.o 00:04:18.859 CC lib/util/cpuset.o 00:04:18.859 CC lib/util/crc32c.o 00:04:18.859 CC lib/util/crc16.o 00:04:18.859 CC lib/ioat/ioat.o 00:04:18.859 CC lib/dma/dma.o 00:04:18.859 CXX lib/trace_parser/trace.o 00:04:18.859 CC lib/vfio_user/host/vfio_user_pci.o 00:04:18.859 CC lib/util/crc32_ieee.o 00:04:18.859 CC lib/util/crc64.o 00:04:18.859 CC lib/util/dif.o 00:04:18.859 CC lib/vfio_user/host/vfio_user.o 00:04:18.859 CC lib/util/fd.o 00:04:18.859 LIB libspdk_dma.a 00:04:18.859 CC lib/util/fd_group.o 00:04:18.859 CC lib/util/file.o 00:04:18.859 SO libspdk_dma.so.5.0 00:04:18.859 CC lib/util/hexlify.o 00:04:18.859 LIB 
libspdk_ioat.a 00:04:18.859 SYMLINK libspdk_dma.so 00:04:18.859 CC lib/util/iov.o 00:04:18.859 SO libspdk_ioat.so.7.0 00:04:18.859 CC lib/util/math.o 00:04:18.859 CC lib/util/net.o 00:04:18.859 LIB libspdk_vfio_user.a 00:04:18.859 SYMLINK libspdk_ioat.so 00:04:18.859 CC lib/util/pipe.o 00:04:18.859 CC lib/util/strerror_tls.o 00:04:18.859 CC lib/util/string.o 00:04:18.859 SO libspdk_vfio_user.so.5.0 00:04:18.859 CC lib/util/uuid.o 00:04:18.859 SYMLINK libspdk_vfio_user.so 00:04:18.859 CC lib/util/xor.o 00:04:18.859 CC lib/util/zipf.o 00:04:18.859 CC lib/util/md5.o 00:04:18.859 LIB libspdk_util.a 00:04:18.859 SO libspdk_util.so.10.0 00:04:18.859 LIB libspdk_trace_parser.a 00:04:18.859 SO libspdk_trace_parser.so.6.0 00:04:18.859 SYMLINK libspdk_util.so 00:04:18.859 SYMLINK libspdk_trace_parser.so 00:04:18.859 CC lib/rdma_provider/common.o 00:04:18.859 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:18.859 CC lib/idxd/idxd.o 00:04:18.859 CC lib/idxd/idxd_user.o 00:04:18.859 CC lib/json/json_parse.o 00:04:18.859 CC lib/vmd/vmd.o 00:04:18.859 CC lib/idxd/idxd_kernel.o 00:04:18.859 CC lib/conf/conf.o 00:04:18.859 CC lib/env_dpdk/env.o 00:04:18.859 CC lib/rdma_utils/rdma_utils.o 00:04:18.859 CC lib/env_dpdk/memory.o 00:04:18.859 CC lib/json/json_util.o 00:04:18.859 LIB libspdk_rdma_provider.a 00:04:18.859 LIB libspdk_conf.a 00:04:18.859 SO libspdk_rdma_provider.so.6.0 00:04:18.859 SO libspdk_conf.so.6.0 00:04:18.859 CC lib/vmd/led.o 00:04:18.859 CC lib/env_dpdk/pci.o 00:04:18.859 SYMLINK libspdk_rdma_provider.so 00:04:18.859 CC lib/env_dpdk/init.o 00:04:18.859 LIB libspdk_rdma_utils.a 00:04:18.859 SYMLINK libspdk_conf.so 00:04:18.859 CC lib/env_dpdk/threads.o 00:04:18.859 SO libspdk_rdma_utils.so.1.0 00:04:18.859 SYMLINK libspdk_rdma_utils.so 00:04:18.859 CC lib/json/json_write.o 00:04:18.859 CC lib/env_dpdk/pci_ioat.o 00:04:18.859 CC lib/env_dpdk/pci_virtio.o 00:04:18.859 CC lib/env_dpdk/pci_vmd.o 00:04:18.859 CC lib/env_dpdk/pci_idxd.o 00:04:18.859 CC 
lib/env_dpdk/pci_event.o 00:04:18.859 CC lib/env_dpdk/sigbus_handler.o 00:04:18.859 CC lib/env_dpdk/pci_dpdk.o 00:04:18.859 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:18.859 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:18.859 LIB libspdk_json.a 00:04:18.859 LIB libspdk_idxd.a 00:04:18.859 LIB libspdk_vmd.a 00:04:18.859 SO libspdk_json.so.6.0 00:04:18.859 SO libspdk_idxd.so.12.1 00:04:18.859 SO libspdk_vmd.so.6.0 00:04:18.859 SYMLINK libspdk_json.so 00:04:18.859 SYMLINK libspdk_idxd.so 00:04:18.859 SYMLINK libspdk_vmd.so 00:04:18.859 CC lib/jsonrpc/jsonrpc_server.o 00:04:18.859 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:18.859 CC lib/jsonrpc/jsonrpc_client.o 00:04:18.859 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:18.859 LIB libspdk_jsonrpc.a 00:04:18.859 SO libspdk_jsonrpc.so.6.0 00:04:18.859 LIB libspdk_env_dpdk.a 00:04:18.859 SYMLINK libspdk_jsonrpc.so 00:04:18.859 SO libspdk_env_dpdk.so.15.0 00:04:18.859 SYMLINK libspdk_env_dpdk.so 00:04:18.859 CC lib/rpc/rpc.o 00:04:18.859 LIB libspdk_rpc.a 00:04:18.859 SO libspdk_rpc.so.6.0 00:04:18.859 SYMLINK libspdk_rpc.so 00:04:18.859 CC lib/trace/trace.o 00:04:18.859 CC lib/trace/trace_flags.o 00:04:18.859 CC lib/trace/trace_rpc.o 00:04:18.859 CC lib/keyring/keyring_rpc.o 00:04:18.859 CC lib/notify/notify.o 00:04:18.859 CC lib/keyring/keyring.o 00:04:18.859 CC lib/notify/notify_rpc.o 00:04:18.859 LIB libspdk_notify.a 00:04:18.859 SO libspdk_notify.so.6.0 00:04:18.859 SYMLINK libspdk_notify.so 00:04:18.859 LIB libspdk_keyring.a 00:04:18.859 LIB libspdk_trace.a 00:04:18.859 SO libspdk_trace.so.11.0 00:04:18.859 SO libspdk_keyring.so.2.0 00:04:18.859 SYMLINK libspdk_trace.so 00:04:18.859 SYMLINK libspdk_keyring.so 00:04:18.859 CC lib/thread/thread.o 00:04:18.859 CC lib/thread/iobuf.o 00:04:18.859 CC lib/sock/sock.o 00:04:18.859 CC lib/sock/sock_rpc.o 00:04:18.859 LIB libspdk_sock.a 00:04:18.859 SO libspdk_sock.so.10.0 00:04:18.859 SYMLINK libspdk_sock.so 00:04:18.859 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:18.859 CC lib/nvme/nvme_ctrlr.o 
00:04:18.859 CC lib/nvme/nvme_fabric.o 00:04:18.859 CC lib/nvme/nvme_ns_cmd.o 00:04:18.859 CC lib/nvme/nvme_ns.o 00:04:18.859 CC lib/nvme/nvme_pcie_common.o 00:04:18.859 CC lib/nvme/nvme_pcie.o 00:04:18.859 CC lib/nvme/nvme.o 00:04:18.859 CC lib/nvme/nvme_qpair.o 00:04:18.859 LIB libspdk_thread.a 00:04:18.859 SO libspdk_thread.so.10.2 00:04:18.859 CC lib/nvme/nvme_quirks.o 00:04:18.859 SYMLINK libspdk_thread.so 00:04:18.859 CC lib/nvme/nvme_transport.o 00:04:18.859 CC lib/nvme/nvme_discovery.o 00:04:18.859 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:18.859 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:18.859 CC lib/nvme/nvme_tcp.o 00:04:18.859 CC lib/nvme/nvme_opal.o 00:04:18.859 CC lib/nvme/nvme_io_msg.o 00:04:18.859 CC lib/nvme/nvme_poll_group.o 00:04:18.859 CC lib/nvme/nvme_zns.o 00:04:18.859 CC lib/accel/accel.o 00:04:18.859 CC lib/blob/blobstore.o 00:04:18.859 CC lib/nvme/nvme_stubs.o 00:04:18.859 CC lib/init/json_config.o 00:04:19.119 CC lib/virtio/virtio.o 00:04:19.119 CC lib/fsdev/fsdev.o 00:04:19.119 CC lib/fsdev/fsdev_io.o 00:04:19.119 CC lib/nvme/nvme_auth.o 00:04:19.119 CC lib/init/subsystem.o 00:04:19.378 CC lib/nvme/nvme_cuse.o 00:04:19.378 CC lib/virtio/virtio_vhost_user.o 00:04:19.378 CC lib/fsdev/fsdev_rpc.o 00:04:19.378 CC lib/init/subsystem_rpc.o 00:04:19.639 CC lib/nvme/nvme_rdma.o 00:04:19.639 CC lib/init/rpc.o 00:04:19.639 LIB libspdk_fsdev.a 00:04:19.639 CC lib/virtio/virtio_vfio_user.o 00:04:19.898 LIB libspdk_init.a 00:04:19.898 SO libspdk_fsdev.so.1.0 00:04:19.898 CC lib/virtio/virtio_pci.o 00:04:19.898 SO libspdk_init.so.6.0 00:04:19.898 CC lib/blob/request.o 00:04:19.898 SYMLINK libspdk_fsdev.so 00:04:19.898 SYMLINK libspdk_init.so 00:04:19.898 CC lib/blob/zeroes.o 00:04:19.898 CC lib/accel/accel_rpc.o 00:04:19.898 CC lib/accel/accel_sw.o 00:04:19.898 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:20.158 CC lib/blob/blob_bs_dev.o 00:04:20.158 LIB libspdk_virtio.a 00:04:20.158 SO libspdk_virtio.so.7.0 00:04:20.158 SYMLINK libspdk_virtio.so 
00:04:20.158 CC lib/event/reactor.o 00:04:20.158 CC lib/event/app_rpc.o 00:04:20.158 CC lib/event/scheduler_static.o 00:04:20.158 CC lib/event/app.o 00:04:20.158 CC lib/event/log_rpc.o 00:04:20.158 LIB libspdk_accel.a 00:04:20.416 SO libspdk_accel.so.16.0 00:04:20.416 SYMLINK libspdk_accel.so 00:04:20.674 CC lib/bdev/bdev.o 00:04:20.674 CC lib/bdev/scsi_nvme.o 00:04:20.674 CC lib/bdev/bdev_zone.o 00:04:20.674 CC lib/bdev/bdev_rpc.o 00:04:20.674 CC lib/bdev/part.o 00:04:20.674 LIB libspdk_event.a 00:04:20.674 LIB libspdk_fuse_dispatcher.a 00:04:20.674 SO libspdk_fuse_dispatcher.so.1.0 00:04:20.675 SO libspdk_event.so.15.0 00:04:20.933 SYMLINK libspdk_fuse_dispatcher.so 00:04:20.933 SYMLINK libspdk_event.so 00:04:20.933 LIB libspdk_nvme.a 00:04:21.191 SO libspdk_nvme.so.14.0 00:04:21.450 SYMLINK libspdk_nvme.so 00:04:22.385 LIB libspdk_blob.a 00:04:22.385 SO libspdk_blob.so.11.0 00:04:22.643 SYMLINK libspdk_blob.so 00:04:22.901 CC lib/blobfs/blobfs.o 00:04:22.901 CC lib/blobfs/tree.o 00:04:22.901 CC lib/lvol/lvol.o 00:04:23.469 LIB libspdk_bdev.a 00:04:23.469 SO libspdk_bdev.so.17.0 00:04:23.732 SYMLINK libspdk_bdev.so 00:04:23.732 LIB libspdk_blobfs.a 00:04:23.732 SO libspdk_blobfs.so.10.0 00:04:23.732 CC lib/nbd/nbd.o 00:04:23.732 CC lib/nbd/nbd_rpc.o 00:04:23.732 CC lib/ftl/ftl_core.o 00:04:23.732 CC lib/ftl/ftl_init.o 00:04:23.732 CC lib/ftl/ftl_layout.o 00:04:23.732 CC lib/ublk/ublk.o 00:04:23.732 CC lib/nvmf/ctrlr.o 00:04:23.732 SYMLINK libspdk_blobfs.so 00:04:23.732 CC lib/scsi/dev.o 00:04:23.732 CC lib/nvmf/ctrlr_discovery.o 00:04:24.002 LIB libspdk_lvol.a 00:04:24.002 CC lib/nvmf/ctrlr_bdev.o 00:04:24.002 CC lib/ftl/ftl_debug.o 00:04:24.002 SO libspdk_lvol.so.10.0 00:04:24.002 CC lib/scsi/lun.o 00:04:24.002 SYMLINK libspdk_lvol.so 00:04:24.002 CC lib/ftl/ftl_io.o 00:04:24.002 CC lib/ftl/ftl_sb.o 00:04:24.270 LIB libspdk_nbd.a 00:04:24.270 CC lib/ftl/ftl_l2p.o 00:04:24.270 CC lib/ftl/ftl_l2p_flat.o 00:04:24.270 SO libspdk_nbd.so.7.0 00:04:24.270 SYMLINK 
libspdk_nbd.so 00:04:24.270 CC lib/ftl/ftl_nv_cache.o 00:04:24.270 CC lib/ftl/ftl_band.o 00:04:24.270 CC lib/ftl/ftl_band_ops.o 00:04:24.270 CC lib/scsi/port.o 00:04:24.270 CC lib/nvmf/subsystem.o 00:04:24.529 CC lib/nvmf/nvmf.o 00:04:24.529 CC lib/nvmf/nvmf_rpc.o 00:04:24.529 CC lib/scsi/scsi.o 00:04:24.529 CC lib/ublk/ublk_rpc.o 00:04:24.529 CC lib/scsi/scsi_bdev.o 00:04:24.529 CC lib/ftl/ftl_writer.o 00:04:24.787 LIB libspdk_ublk.a 00:04:24.788 SO libspdk_ublk.so.3.0 00:04:24.788 CC lib/nvmf/transport.o 00:04:24.788 CC lib/ftl/ftl_rq.o 00:04:24.788 SYMLINK libspdk_ublk.so 00:04:24.788 CC lib/scsi/scsi_pr.o 00:04:24.788 CC lib/ftl/ftl_reloc.o 00:04:25.047 CC lib/ftl/ftl_l2p_cache.o 00:04:25.047 CC lib/scsi/scsi_rpc.o 00:04:25.047 CC lib/scsi/task.o 00:04:25.305 CC lib/ftl/ftl_p2l.o 00:04:25.305 CC lib/ftl/ftl_p2l_log.o 00:04:25.305 CC lib/ftl/mngt/ftl_mngt.o 00:04:25.305 CC lib/nvmf/tcp.o 00:04:25.305 LIB libspdk_scsi.a 00:04:25.305 CC lib/nvmf/stubs.o 00:04:25.305 SO libspdk_scsi.so.9.0 00:04:25.564 SYMLINK libspdk_scsi.so 00:04:25.564 CC lib/nvmf/mdns_server.o 00:04:25.564 CC lib/nvmf/rdma.o 00:04:25.564 CC lib/nvmf/auth.o 00:04:25.564 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:25.822 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:25.822 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:25.823 CC lib/iscsi/conn.o 00:04:25.823 CC lib/vhost/vhost.o 00:04:25.823 CC lib/vhost/vhost_rpc.o 00:04:25.823 CC lib/vhost/vhost_scsi.o 00:04:25.823 CC lib/vhost/vhost_blk.o 00:04:25.823 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:26.081 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:26.081 CC lib/iscsi/init_grp.o 00:04:26.340 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:26.340 CC lib/vhost/rte_vhost_user.o 00:04:26.340 CC lib/iscsi/iscsi.o 00:04:26.340 CC lib/iscsi/param.o 00:04:26.340 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:26.340 CC lib/iscsi/portal_grp.o 00:04:26.340 CC lib/iscsi/tgt_node.o 00:04:26.599 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:26.599 CC lib/iscsi/iscsi_subsystem.o 00:04:26.857 CC 
lib/ftl/mngt/ftl_mngt_self_test.o 00:04:26.857 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:26.857 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:26.857 CC lib/iscsi/iscsi_rpc.o 00:04:26.857 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:26.857 CC lib/ftl/utils/ftl_conf.o 00:04:26.857 CC lib/iscsi/task.o 00:04:27.117 CC lib/ftl/utils/ftl_md.o 00:04:27.117 CC lib/ftl/utils/ftl_mempool.o 00:04:27.117 CC lib/ftl/utils/ftl_bitmap.o 00:04:27.117 CC lib/ftl/utils/ftl_property.o 00:04:27.117 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:27.117 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:27.378 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:27.378 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:27.378 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:27.378 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:27.378 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:27.378 LIB libspdk_vhost.a 00:04:27.378 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:27.378 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:27.378 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:27.378 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:27.378 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:27.378 SO libspdk_vhost.so.8.0 00:04:27.636 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:27.636 CC lib/ftl/base/ftl_base_dev.o 00:04:27.636 SYMLINK libspdk_vhost.so 00:04:27.636 CC lib/ftl/base/ftl_base_bdev.o 00:04:27.636 CC lib/ftl/ftl_trace.o 00:04:27.895 LIB libspdk_ftl.a 00:04:27.895 LIB libspdk_iscsi.a 00:04:27.895 LIB libspdk_nvmf.a 00:04:27.895 SO libspdk_iscsi.so.8.0 00:04:28.154 SO libspdk_nvmf.so.19.0 00:04:28.154 SO libspdk_ftl.so.9.0 00:04:28.154 SYMLINK libspdk_iscsi.so 00:04:28.412 SYMLINK libspdk_nvmf.so 00:04:28.412 SYMLINK libspdk_ftl.so 00:04:28.670 CC module/env_dpdk/env_dpdk_rpc.o 00:04:28.929 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:28.929 CC module/scheduler/gscheduler/gscheduler.o 00:04:28.929 CC module/keyring/file/keyring.o 00:04:28.929 CC module/fsdev/aio/fsdev_aio.o 00:04:28.929 CC module/sock/posix/posix.o 00:04:28.929 CC 
module/scheduler/dpdk_governor/dpdk_governor.o 00:04:28.929 CC module/blob/bdev/blob_bdev.o 00:04:28.929 CC module/keyring/linux/keyring.o 00:04:28.929 CC module/accel/error/accel_error.o 00:04:28.929 LIB libspdk_env_dpdk_rpc.a 00:04:28.929 SO libspdk_env_dpdk_rpc.so.6.0 00:04:28.929 SYMLINK libspdk_env_dpdk_rpc.so 00:04:28.929 CC module/keyring/linux/keyring_rpc.o 00:04:28.929 CC module/keyring/file/keyring_rpc.o 00:04:28.929 LIB libspdk_scheduler_gscheduler.a 00:04:28.929 LIB libspdk_scheduler_dpdk_governor.a 00:04:28.929 CC module/accel/error/accel_error_rpc.o 00:04:28.929 SO libspdk_scheduler_gscheduler.so.4.0 00:04:28.929 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:28.929 LIB libspdk_scheduler_dynamic.a 00:04:28.929 SO libspdk_scheduler_dynamic.so.4.0 00:04:28.929 SYMLINK libspdk_scheduler_gscheduler.so 00:04:28.929 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:28.929 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:28.929 CC module/fsdev/aio/linux_aio_mgr.o 00:04:29.188 SYMLINK libspdk_scheduler_dynamic.so 00:04:29.188 LIB libspdk_keyring_linux.a 00:04:29.188 LIB libspdk_blob_bdev.a 00:04:29.188 LIB libspdk_keyring_file.a 00:04:29.188 LIB libspdk_accel_error.a 00:04:29.188 SO libspdk_keyring_linux.so.1.0 00:04:29.188 SO libspdk_blob_bdev.so.11.0 00:04:29.188 SO libspdk_keyring_file.so.2.0 00:04:29.188 SO libspdk_accel_error.so.2.0 00:04:29.188 SYMLINK libspdk_keyring_linux.so 00:04:29.188 SYMLINK libspdk_blob_bdev.so 00:04:29.188 SYMLINK libspdk_accel_error.so 00:04:29.188 SYMLINK libspdk_keyring_file.so 00:04:29.188 CC module/accel/ioat/accel_ioat.o 00:04:29.188 CC module/accel/ioat/accel_ioat_rpc.o 00:04:29.188 CC module/accel/dsa/accel_dsa.o 00:04:29.188 CC module/accel/dsa/accel_dsa_rpc.o 00:04:29.446 CC module/accel/iaa/accel_iaa.o 00:04:29.446 CC module/accel/iaa/accel_iaa_rpc.o 00:04:29.446 LIB libspdk_accel_ioat.a 00:04:29.446 CC module/blobfs/bdev/blobfs_bdev.o 00:04:29.446 SO libspdk_accel_ioat.so.6.0 00:04:29.446 CC module/bdev/error/vbdev_error.o 
00:04:29.446 CC module/bdev/delay/vbdev_delay.o 00:04:29.446 SYMLINK libspdk_accel_ioat.so 00:04:29.446 CC module/bdev/error/vbdev_error_rpc.o 00:04:29.446 LIB libspdk_accel_dsa.a 00:04:29.446 CC module/bdev/gpt/gpt.o 00:04:29.446 LIB libspdk_fsdev_aio.a 00:04:29.446 SO libspdk_accel_dsa.so.5.0 00:04:29.446 SO libspdk_fsdev_aio.so.1.0 00:04:29.446 LIB libspdk_accel_iaa.a 00:04:29.705 LIB libspdk_sock_posix.a 00:04:29.705 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:29.705 SYMLINK libspdk_accel_dsa.so 00:04:29.705 SO libspdk_accel_iaa.so.3.0 00:04:29.705 SO libspdk_sock_posix.so.6.0 00:04:29.705 SYMLINK libspdk_fsdev_aio.so 00:04:29.705 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:29.705 CC module/bdev/lvol/vbdev_lvol.o 00:04:29.705 SYMLINK libspdk_accel_iaa.so 00:04:29.705 CC module/bdev/gpt/vbdev_gpt.o 00:04:29.705 SYMLINK libspdk_sock_posix.so 00:04:29.705 LIB libspdk_bdev_error.a 00:04:29.705 SO libspdk_bdev_error.so.6.0 00:04:29.705 LIB libspdk_blobfs_bdev.a 00:04:29.705 CC module/bdev/malloc/bdev_malloc.o 00:04:29.705 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:29.705 SYMLINK libspdk_bdev_error.so 00:04:29.705 SO libspdk_blobfs_bdev.so.6.0 00:04:29.705 CC module/bdev/null/bdev_null.o 00:04:29.705 LIB libspdk_bdev_delay.a 00:04:29.963 CC module/bdev/nvme/bdev_nvme.o 00:04:29.963 CC module/bdev/passthru/vbdev_passthru.o 00:04:29.963 SO libspdk_bdev_delay.so.6.0 00:04:29.963 SYMLINK libspdk_blobfs_bdev.so 00:04:29.963 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:29.963 SYMLINK libspdk_bdev_delay.so 00:04:29.963 CC module/bdev/null/bdev_null_rpc.o 00:04:29.963 CC module/bdev/raid/bdev_raid.o 00:04:29.963 LIB libspdk_bdev_gpt.a 00:04:29.964 SO libspdk_bdev_gpt.so.6.0 00:04:29.964 CC module/bdev/raid/bdev_raid_rpc.o 00:04:29.964 SYMLINK libspdk_bdev_gpt.so 00:04:30.222 LIB libspdk_bdev_null.a 00:04:30.222 SO libspdk_bdev_null.so.6.0 00:04:30.222 LIB libspdk_bdev_passthru.a 00:04:30.222 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:30.222 LIB libspdk_bdev_lvol.a 
00:04:30.222 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:30.222 SO libspdk_bdev_passthru.so.6.0 00:04:30.222 CC module/bdev/split/vbdev_split.o 00:04:30.222 SYMLINK libspdk_bdev_null.so 00:04:30.222 SO libspdk_bdev_lvol.so.6.0 00:04:30.222 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:30.222 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:30.222 SYMLINK libspdk_bdev_passthru.so 00:04:30.222 CC module/bdev/split/vbdev_split_rpc.o 00:04:30.222 SYMLINK libspdk_bdev_lvol.so 00:04:30.222 LIB libspdk_bdev_malloc.a 00:04:30.481 SO libspdk_bdev_malloc.so.6.0 00:04:30.481 CC module/bdev/nvme/nvme_rpc.o 00:04:30.481 CC module/bdev/aio/bdev_aio.o 00:04:30.481 CC module/bdev/raid/bdev_raid_sb.o 00:04:30.481 LIB libspdk_bdev_split.a 00:04:30.481 SYMLINK libspdk_bdev_malloc.so 00:04:30.481 CC module/bdev/raid/raid0.o 00:04:30.481 CC module/bdev/ftl/bdev_ftl.o 00:04:30.481 SO libspdk_bdev_split.so.6.0 00:04:30.481 SYMLINK libspdk_bdev_split.so 00:04:30.481 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:30.481 LIB libspdk_bdev_zone_block.a 00:04:30.481 SO libspdk_bdev_zone_block.so.6.0 00:04:30.739 CC module/bdev/nvme/bdev_mdns_client.o 00:04:30.739 SYMLINK libspdk_bdev_zone_block.so 00:04:30.739 CC module/bdev/nvme/vbdev_opal.o 00:04:30.739 CC module/bdev/raid/raid1.o 00:04:30.739 CC module/bdev/raid/concat.o 00:04:30.739 LIB libspdk_bdev_ftl.a 00:04:30.739 SO libspdk_bdev_ftl.so.6.0 00:04:30.739 CC module/bdev/aio/bdev_aio_rpc.o 00:04:30.739 SYMLINK libspdk_bdev_ftl.so 00:04:30.739 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:30.739 CC module/bdev/raid/raid5f.o 00:04:30.739 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:30.998 CC module/bdev/iscsi/bdev_iscsi.o 00:04:30.998 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:30.998 LIB libspdk_bdev_aio.a 00:04:30.998 SO libspdk_bdev_aio.so.6.0 00:04:30.998 SYMLINK libspdk_bdev_aio.so 00:04:30.998 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:30.998 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:30.998 CC 
module/bdev/virtio/bdev_virtio_rpc.o 00:04:31.256 LIB libspdk_bdev_iscsi.a 00:04:31.256 LIB libspdk_bdev_raid.a 00:04:31.256 SO libspdk_bdev_iscsi.so.6.0 00:04:31.515 SYMLINK libspdk_bdev_iscsi.so 00:04:31.515 SO libspdk_bdev_raid.so.6.0 00:04:31.515 SYMLINK libspdk_bdev_raid.so 00:04:31.773 LIB libspdk_bdev_virtio.a 00:04:31.773 SO libspdk_bdev_virtio.so.6.0 00:04:31.773 SYMLINK libspdk_bdev_virtio.so 00:04:32.340 LIB libspdk_bdev_nvme.a 00:04:32.340 SO libspdk_bdev_nvme.so.7.0 00:04:32.598 SYMLINK libspdk_bdev_nvme.so 00:04:33.166 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:33.166 CC module/event/subsystems/fsdev/fsdev.o 00:04:33.166 CC module/event/subsystems/iobuf/iobuf.o 00:04:33.166 CC module/event/subsystems/vmd/vmd.o 00:04:33.166 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:33.166 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:33.166 CC module/event/subsystems/sock/sock.o 00:04:33.166 CC module/event/subsystems/keyring/keyring.o 00:04:33.166 CC module/event/subsystems/scheduler/scheduler.o 00:04:33.166 LIB libspdk_event_vhost_blk.a 00:04:33.166 LIB libspdk_event_keyring.a 00:04:33.166 LIB libspdk_event_vmd.a 00:04:33.166 LIB libspdk_event_iobuf.a 00:04:33.166 LIB libspdk_event_scheduler.a 00:04:33.166 LIB libspdk_event_sock.a 00:04:33.166 SO libspdk_event_vhost_blk.so.3.0 00:04:33.166 LIB libspdk_event_fsdev.a 00:04:33.166 SO libspdk_event_keyring.so.1.0 00:04:33.166 SO libspdk_event_vmd.so.6.0 00:04:33.166 SO libspdk_event_sock.so.5.0 00:04:33.166 SO libspdk_event_scheduler.so.4.0 00:04:33.166 SO libspdk_event_iobuf.so.3.0 00:04:33.166 SO libspdk_event_fsdev.so.1.0 00:04:33.166 SYMLINK libspdk_event_vhost_blk.so 00:04:33.166 SYMLINK libspdk_event_keyring.so 00:04:33.166 SYMLINK libspdk_event_sock.so 00:04:33.166 SYMLINK libspdk_event_scheduler.so 00:04:33.166 SYMLINK libspdk_event_vmd.so 00:04:33.166 SYMLINK libspdk_event_iobuf.so 00:04:33.166 SYMLINK libspdk_event_fsdev.so 00:04:33.734 CC module/event/subsystems/accel/accel.o 00:04:33.734 
LIB libspdk_event_accel.a 00:04:33.734 SO libspdk_event_accel.so.6.0 00:04:33.993 SYMLINK libspdk_event_accel.so 00:04:34.251 CC module/event/subsystems/bdev/bdev.o 00:04:34.520 LIB libspdk_event_bdev.a 00:04:34.520 SO libspdk_event_bdev.so.6.0 00:04:34.520 SYMLINK libspdk_event_bdev.so 00:04:35.089 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:35.089 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:35.089 CC module/event/subsystems/nbd/nbd.o 00:04:35.089 CC module/event/subsystems/ublk/ublk.o 00:04:35.089 CC module/event/subsystems/scsi/scsi.o 00:04:35.089 LIB libspdk_event_nbd.a 00:04:35.089 LIB libspdk_event_scsi.a 00:04:35.089 LIB libspdk_event_ublk.a 00:04:35.089 SO libspdk_event_nbd.so.6.0 00:04:35.089 SO libspdk_event_scsi.so.6.0 00:04:35.089 SO libspdk_event_ublk.so.3.0 00:04:35.089 LIB libspdk_event_nvmf.a 00:04:35.089 SYMLINK libspdk_event_scsi.so 00:04:35.089 SYMLINK libspdk_event_nbd.so 00:04:35.089 SYMLINK libspdk_event_ublk.so 00:04:35.089 SO libspdk_event_nvmf.so.6.0 00:04:35.347 SYMLINK libspdk_event_nvmf.so 00:04:35.607 CC module/event/subsystems/iscsi/iscsi.o 00:04:35.607 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:35.607 LIB libspdk_event_vhost_scsi.a 00:04:35.607 LIB libspdk_event_iscsi.a 00:04:35.607 SO libspdk_event_vhost_scsi.so.3.0 00:04:35.874 SO libspdk_event_iscsi.so.6.0 00:04:35.874 SYMLINK libspdk_event_vhost_scsi.so 00:04:35.874 SYMLINK libspdk_event_iscsi.so 00:04:36.149 SO libspdk.so.6.0 00:04:36.149 SYMLINK libspdk.so 00:04:36.407 CXX app/trace/trace.o 00:04:36.407 TEST_HEADER include/spdk/accel.h 00:04:36.407 TEST_HEADER include/spdk/accel_module.h 00:04:36.407 TEST_HEADER include/spdk/assert.h 00:04:36.407 TEST_HEADER include/spdk/barrier.h 00:04:36.407 CC app/trace_record/trace_record.o 00:04:36.407 TEST_HEADER include/spdk/base64.h 00:04:36.407 TEST_HEADER include/spdk/bdev.h 00:04:36.407 TEST_HEADER include/spdk/bdev_module.h 00:04:36.407 TEST_HEADER include/spdk/bdev_zone.h 00:04:36.407 TEST_HEADER 
include/spdk/bit_array.h 00:04:36.407 TEST_HEADER include/spdk/bit_pool.h 00:04:36.407 TEST_HEADER include/spdk/blob_bdev.h 00:04:36.407 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:36.407 TEST_HEADER include/spdk/blobfs.h 00:04:36.407 TEST_HEADER include/spdk/blob.h 00:04:36.407 TEST_HEADER include/spdk/conf.h 00:04:36.407 TEST_HEADER include/spdk/config.h 00:04:36.407 TEST_HEADER include/spdk/cpuset.h 00:04:36.407 TEST_HEADER include/spdk/crc16.h 00:04:36.407 TEST_HEADER include/spdk/crc32.h 00:04:36.407 TEST_HEADER include/spdk/crc64.h 00:04:36.407 TEST_HEADER include/spdk/dif.h 00:04:36.407 TEST_HEADER include/spdk/dma.h 00:04:36.407 CC app/iscsi_tgt/iscsi_tgt.o 00:04:36.407 TEST_HEADER include/spdk/endian.h 00:04:36.407 TEST_HEADER include/spdk/env_dpdk.h 00:04:36.407 CC app/nvmf_tgt/nvmf_main.o 00:04:36.407 TEST_HEADER include/spdk/env.h 00:04:36.407 TEST_HEADER include/spdk/event.h 00:04:36.407 TEST_HEADER include/spdk/fd_group.h 00:04:36.407 TEST_HEADER include/spdk/fd.h 00:04:36.407 TEST_HEADER include/spdk/file.h 00:04:36.407 TEST_HEADER include/spdk/fsdev.h 00:04:36.407 TEST_HEADER include/spdk/fsdev_module.h 00:04:36.407 TEST_HEADER include/spdk/ftl.h 00:04:36.407 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:36.407 TEST_HEADER include/spdk/gpt_spec.h 00:04:36.407 TEST_HEADER include/spdk/hexlify.h 00:04:36.407 TEST_HEADER include/spdk/histogram_data.h 00:04:36.407 TEST_HEADER include/spdk/idxd.h 00:04:36.407 TEST_HEADER include/spdk/idxd_spec.h 00:04:36.407 CC test/thread/poller_perf/poller_perf.o 00:04:36.407 CC examples/util/zipf/zipf.o 00:04:36.407 TEST_HEADER include/spdk/init.h 00:04:36.407 TEST_HEADER include/spdk/ioat.h 00:04:36.407 TEST_HEADER include/spdk/ioat_spec.h 00:04:36.407 TEST_HEADER include/spdk/iscsi_spec.h 00:04:36.407 TEST_HEADER include/spdk/json.h 00:04:36.407 TEST_HEADER include/spdk/jsonrpc.h 00:04:36.407 TEST_HEADER include/spdk/keyring.h 00:04:36.407 TEST_HEADER include/spdk/keyring_module.h 00:04:36.407 TEST_HEADER 
include/spdk/likely.h 00:04:36.407 TEST_HEADER include/spdk/log.h 00:04:36.407 TEST_HEADER include/spdk/lvol.h 00:04:36.407 CC test/dma/test_dma/test_dma.o 00:04:36.407 TEST_HEADER include/spdk/md5.h 00:04:36.407 TEST_HEADER include/spdk/memory.h 00:04:36.407 TEST_HEADER include/spdk/mmio.h 00:04:36.407 TEST_HEADER include/spdk/nbd.h 00:04:36.407 TEST_HEADER include/spdk/net.h 00:04:36.407 TEST_HEADER include/spdk/notify.h 00:04:36.407 TEST_HEADER include/spdk/nvme.h 00:04:36.407 TEST_HEADER include/spdk/nvme_intel.h 00:04:36.407 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:36.407 CC test/app/bdev_svc/bdev_svc.o 00:04:36.407 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:36.407 TEST_HEADER include/spdk/nvme_spec.h 00:04:36.407 TEST_HEADER include/spdk/nvme_zns.h 00:04:36.407 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:36.407 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:36.407 TEST_HEADER include/spdk/nvmf.h 00:04:36.407 TEST_HEADER include/spdk/nvmf_spec.h 00:04:36.407 TEST_HEADER include/spdk/nvmf_transport.h 00:04:36.407 TEST_HEADER include/spdk/opal.h 00:04:36.407 TEST_HEADER include/spdk/opal_spec.h 00:04:36.407 TEST_HEADER include/spdk/pci_ids.h 00:04:36.407 TEST_HEADER include/spdk/pipe.h 00:04:36.407 TEST_HEADER include/spdk/queue.h 00:04:36.407 TEST_HEADER include/spdk/reduce.h 00:04:36.407 TEST_HEADER include/spdk/rpc.h 00:04:36.407 TEST_HEADER include/spdk/scheduler.h 00:04:36.407 TEST_HEADER include/spdk/scsi.h 00:04:36.407 TEST_HEADER include/spdk/scsi_spec.h 00:04:36.407 TEST_HEADER include/spdk/sock.h 00:04:36.407 TEST_HEADER include/spdk/stdinc.h 00:04:36.407 CC test/env/mem_callbacks/mem_callbacks.o 00:04:36.407 TEST_HEADER include/spdk/string.h 00:04:36.407 TEST_HEADER include/spdk/thread.h 00:04:36.407 TEST_HEADER include/spdk/trace.h 00:04:36.407 TEST_HEADER include/spdk/trace_parser.h 00:04:36.407 TEST_HEADER include/spdk/tree.h 00:04:36.408 TEST_HEADER include/spdk/ublk.h 00:04:36.408 TEST_HEADER include/spdk/util.h 00:04:36.408 TEST_HEADER 
include/spdk/uuid.h 00:04:36.408 TEST_HEADER include/spdk/version.h 00:04:36.408 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:36.408 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:36.408 TEST_HEADER include/spdk/vhost.h 00:04:36.408 TEST_HEADER include/spdk/vmd.h 00:04:36.408 TEST_HEADER include/spdk/xor.h 00:04:36.408 TEST_HEADER include/spdk/zipf.h 00:04:36.408 CXX test/cpp_headers/accel.o 00:04:36.408 LINK nvmf_tgt 00:04:36.408 LINK poller_perf 00:04:36.665 LINK spdk_trace_record 00:04:36.665 LINK zipf 00:04:36.665 LINK iscsi_tgt 00:04:36.665 LINK bdev_svc 00:04:36.665 CXX test/cpp_headers/accel_module.o 00:04:36.665 LINK spdk_trace 00:04:36.665 CXX test/cpp_headers/assert.o 00:04:36.665 CC test/rpc_client/rpc_client_test.o 00:04:36.924 CC examples/ioat/perf/perf.o 00:04:36.924 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:36.924 CC examples/vmd/lsvmd/lsvmd.o 00:04:36.924 CXX test/cpp_headers/barrier.o 00:04:36.924 CC examples/idxd/perf/perf.o 00:04:36.924 LINK mem_callbacks 00:04:36.924 LINK test_dma 00:04:36.924 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:36.924 LINK rpc_client_test 00:04:36.924 LINK lsvmd 00:04:36.924 CXX test/cpp_headers/base64.o 00:04:36.924 CC app/spdk_tgt/spdk_tgt.o 00:04:36.924 LINK interrupt_tgt 00:04:37.182 LINK ioat_perf 00:04:37.182 CC test/env/vtophys/vtophys.o 00:04:37.182 CC app/spdk_lspci/spdk_lspci.o 00:04:37.182 CXX test/cpp_headers/bdev.o 00:04:37.182 CC app/spdk_nvme_perf/perf.o 00:04:37.182 LINK spdk_tgt 00:04:37.182 CC examples/vmd/led/led.o 00:04:37.182 LINK idxd_perf 00:04:37.182 LINK vtophys 00:04:37.182 LINK spdk_lspci 00:04:37.182 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:37.182 CC examples/ioat/verify/verify.o 00:04:37.182 LINK nvme_fuzz 00:04:37.182 CXX test/cpp_headers/bdev_module.o 00:04:37.440 LINK led 00:04:37.440 CXX test/cpp_headers/bdev_zone.o 00:04:37.440 LINK env_dpdk_post_init 00:04:37.440 CC test/app/histogram_perf/histogram_perf.o 00:04:37.440 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 
00:04:37.440 LINK verify 00:04:37.440 CXX test/cpp_headers/bit_array.o 00:04:37.440 CC test/env/memory/memory_ut.o 00:04:37.440 CC test/env/pci/pci_ut.o 00:04:37.440 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:37.698 LINK histogram_perf 00:04:37.698 CC test/event/event_perf/event_perf.o 00:04:37.698 CXX test/cpp_headers/bit_pool.o 00:04:37.698 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:37.698 CC test/nvme/aer/aer.o 00:04:37.698 LINK event_perf 00:04:37.698 CXX test/cpp_headers/blob_bdev.o 00:04:37.698 CC examples/thread/thread/thread_ex.o 00:04:37.956 CC examples/sock/hello_world/hello_sock.o 00:04:37.956 LINK pci_ut 00:04:37.956 CXX test/cpp_headers/blobfs_bdev.o 00:04:37.956 LINK aer 00:04:37.956 CC test/event/reactor/reactor.o 00:04:37.956 LINK spdk_nvme_perf 00:04:37.956 LINK thread 00:04:38.215 LINK vhost_fuzz 00:04:38.215 CXX test/cpp_headers/blobfs.o 00:04:38.215 LINK reactor 00:04:38.215 CXX test/cpp_headers/blob.o 00:04:38.215 LINK hello_sock 00:04:38.215 CC test/nvme/reset/reset.o 00:04:38.215 CXX test/cpp_headers/conf.o 00:04:38.215 CC app/spdk_nvme_identify/identify.o 00:04:38.215 CC test/nvme/sgl/sgl.o 00:04:38.215 CC test/event/reactor_perf/reactor_perf.o 00:04:38.473 CXX test/cpp_headers/config.o 00:04:38.473 CC examples/accel/perf/accel_perf.o 00:04:38.473 CC test/accel/dif/dif.o 00:04:38.473 CXX test/cpp_headers/cpuset.o 00:04:38.473 CC examples/blob/hello_world/hello_blob.o 00:04:38.473 LINK reset 00:04:38.473 LINK reactor_perf 00:04:38.473 LINK memory_ut 00:04:38.473 LINK sgl 00:04:38.473 CXX test/cpp_headers/crc16.o 00:04:38.732 CXX test/cpp_headers/crc32.o 00:04:38.732 LINK hello_blob 00:04:38.732 CC test/event/app_repeat/app_repeat.o 00:04:38.732 CXX test/cpp_headers/crc64.o 00:04:38.732 CC test/nvme/e2edp/nvme_dp.o 00:04:38.732 LINK app_repeat 00:04:38.991 CXX test/cpp_headers/dif.o 00:04:38.991 CC test/blobfs/mkfs/mkfs.o 00:04:38.991 CC examples/nvme/hello_world/hello_world.o 00:04:38.991 LINK accel_perf 00:04:38.991 CC 
examples/blob/cli/blobcli.o 00:04:38.991 CXX test/cpp_headers/dma.o 00:04:38.991 LINK dif 00:04:38.991 LINK mkfs 00:04:38.991 LINK nvme_dp 00:04:38.991 CC test/event/scheduler/scheduler.o 00:04:39.250 LINK iscsi_fuzz 00:04:39.250 LINK hello_world 00:04:39.250 CC examples/nvme/reconnect/reconnect.o 00:04:39.250 CXX test/cpp_headers/endian.o 00:04:39.250 LINK spdk_nvme_identify 00:04:39.250 CC test/nvme/overhead/overhead.o 00:04:39.250 LINK scheduler 00:04:39.250 CC app/spdk_nvme_discover/discovery_aer.o 00:04:39.250 CXX test/cpp_headers/env_dpdk.o 00:04:39.250 CC app/spdk_top/spdk_top.o 00:04:39.250 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:39.508 CC test/app/jsoncat/jsoncat.o 00:04:39.508 LINK blobcli 00:04:39.508 CC examples/nvme/arbitration/arbitration.o 00:04:39.508 CXX test/cpp_headers/env.o 00:04:39.508 LINK reconnect 00:04:39.508 LINK spdk_nvme_discover 00:04:39.508 LINK jsoncat 00:04:39.508 LINK overhead 00:04:39.508 CC examples/nvme/hotplug/hotplug.o 00:04:39.508 CXX test/cpp_headers/event.o 00:04:39.508 CXX test/cpp_headers/fd_group.o 00:04:39.767 CXX test/cpp_headers/fd.o 00:04:39.767 CC test/app/stub/stub.o 00:04:39.767 LINK arbitration 00:04:39.767 CC app/vhost/vhost.o 00:04:39.767 CXX test/cpp_headers/file.o 00:04:39.767 LINK hotplug 00:04:39.767 CC test/nvme/err_injection/err_injection.o 00:04:39.767 LINK nvme_manage 00:04:39.767 CC test/nvme/startup/startup.o 00:04:40.026 LINK stub 00:04:40.026 CC app/spdk_dd/spdk_dd.o 00:04:40.026 LINK vhost 00:04:40.026 CXX test/cpp_headers/fsdev.o 00:04:40.026 LINK startup 00:04:40.026 LINK err_injection 00:04:40.026 CC test/nvme/reserve/reserve.o 00:04:40.026 CXX test/cpp_headers/fsdev_module.o 00:04:40.026 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:40.026 CXX test/cpp_headers/ftl.o 00:04:40.026 CC app/fio/nvme/fio_plugin.o 00:04:40.285 LINK spdk_top 00:04:40.285 LINK spdk_dd 00:04:40.285 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:40.285 LINK reserve 00:04:40.285 LINK cmb_copy 00:04:40.285 CC 
test/nvme/simple_copy/simple_copy.o 00:04:40.285 CXX test/cpp_headers/fuse_dispatcher.o 00:04:40.285 CC test/nvme/connect_stress/connect_stress.o 00:04:40.285 CC test/nvme/boot_partition/boot_partition.o 00:04:40.543 CXX test/cpp_headers/gpt_spec.o 00:04:40.543 CXX test/cpp_headers/hexlify.o 00:04:40.543 CXX test/cpp_headers/histogram_data.o 00:04:40.543 LINK connect_stress 00:04:40.543 CC examples/nvme/abort/abort.o 00:04:40.543 LINK boot_partition 00:04:40.543 LINK simple_copy 00:04:40.543 LINK hello_fsdev 00:04:40.544 CC examples/bdev/hello_world/hello_bdev.o 00:04:40.544 CXX test/cpp_headers/idxd.o 00:04:40.802 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:40.802 LINK spdk_nvme 00:04:40.802 CC examples/bdev/bdevperf/bdevperf.o 00:04:40.802 CC test/nvme/compliance/nvme_compliance.o 00:04:40.802 CXX test/cpp_headers/idxd_spec.o 00:04:40.802 LINK hello_bdev 00:04:40.802 CC test/nvme/fused_ordering/fused_ordering.o 00:04:40.802 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:40.802 CC test/nvme/fdp/fdp.o 00:04:40.802 LINK abort 00:04:40.802 LINK pmr_persistence 00:04:40.802 CC app/fio/bdev/fio_plugin.o 00:04:40.802 CXX test/cpp_headers/init.o 00:04:41.061 LINK doorbell_aers 00:04:41.061 LINK fused_ordering 00:04:41.061 CXX test/cpp_headers/ioat.o 00:04:41.061 LINK nvme_compliance 00:04:41.061 CXX test/cpp_headers/ioat_spec.o 00:04:41.061 CXX test/cpp_headers/iscsi_spec.o 00:04:41.061 LINK fdp 00:04:41.061 CC test/nvme/cuse/cuse.o 00:04:41.061 CC test/lvol/esnap/esnap.o 00:04:41.061 CXX test/cpp_headers/json.o 00:04:41.320 CC test/bdev/bdevio/bdevio.o 00:04:41.320 CXX test/cpp_headers/jsonrpc.o 00:04:41.320 CXX test/cpp_headers/keyring.o 00:04:41.320 CXX test/cpp_headers/keyring_module.o 00:04:41.320 CXX test/cpp_headers/likely.o 00:04:41.320 CXX test/cpp_headers/log.o 00:04:41.320 LINK spdk_bdev 00:04:41.320 CXX test/cpp_headers/lvol.o 00:04:41.320 CXX test/cpp_headers/md5.o 00:04:41.320 CXX test/cpp_headers/memory.o 00:04:41.320 CXX 
test/cpp_headers/mmio.o 00:04:41.320 CXX test/cpp_headers/nbd.o 00:04:41.579 CXX test/cpp_headers/net.o 00:04:41.579 CXX test/cpp_headers/notify.o 00:04:41.579 CXX test/cpp_headers/nvme.o 00:04:41.579 CXX test/cpp_headers/nvme_intel.o 00:04:41.579 LINK bdevperf 00:04:41.579 CXX test/cpp_headers/nvme_ocssd.o 00:04:41.579 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:41.579 LINK bdevio 00:04:41.579 CXX test/cpp_headers/nvme_spec.o 00:04:41.579 CXX test/cpp_headers/nvme_zns.o 00:04:41.579 CXX test/cpp_headers/nvmf_cmd.o 00:04:41.579 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:41.837 CXX test/cpp_headers/nvmf.o 00:04:41.837 CXX test/cpp_headers/nvmf_spec.o 00:04:41.837 CXX test/cpp_headers/nvmf_transport.o 00:04:41.837 CXX test/cpp_headers/opal.o 00:04:41.837 CXX test/cpp_headers/opal_spec.o 00:04:41.837 CXX test/cpp_headers/pci_ids.o 00:04:41.837 CXX test/cpp_headers/pipe.o 00:04:41.837 CXX test/cpp_headers/queue.o 00:04:41.837 CXX test/cpp_headers/reduce.o 00:04:41.837 CXX test/cpp_headers/rpc.o 00:04:41.837 CC examples/nvmf/nvmf/nvmf.o 00:04:41.837 CXX test/cpp_headers/scheduler.o 00:04:41.837 CXX test/cpp_headers/scsi.o 00:04:41.837 CXX test/cpp_headers/scsi_spec.o 00:04:42.096 CXX test/cpp_headers/sock.o 00:04:42.096 CXX test/cpp_headers/stdinc.o 00:04:42.096 CXX test/cpp_headers/string.o 00:04:42.096 CXX test/cpp_headers/thread.o 00:04:42.096 CXX test/cpp_headers/trace.o 00:04:42.096 CXX test/cpp_headers/trace_parser.o 00:04:42.096 CXX test/cpp_headers/tree.o 00:04:42.096 CXX test/cpp_headers/ublk.o 00:04:42.096 CXX test/cpp_headers/util.o 00:04:42.096 CXX test/cpp_headers/uuid.o 00:04:42.096 CXX test/cpp_headers/version.o 00:04:42.096 CXX test/cpp_headers/vfio_user_pci.o 00:04:42.096 CXX test/cpp_headers/vfio_user_spec.o 00:04:42.096 CXX test/cpp_headers/vhost.o 00:04:42.096 LINK nvmf 00:04:42.355 CXX test/cpp_headers/vmd.o 00:04:42.355 LINK cuse 00:04:42.355 CXX test/cpp_headers/xor.o 00:04:42.355 CXX test/cpp_headers/zipf.o 00:04:46.542 LINK esnap 00:04:46.801 
00:04:46.801 real 1m15.260s 00:04:46.801 user 5m41.874s 00:04:46.801 sys 1m10.753s 00:04:46.801 01:24:45 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:46.801 01:24:45 make -- common/autotest_common.sh@10 -- $ set +x 00:04:46.801 ************************************ 00:04:46.801 END TEST make 00:04:46.801 ************************************ 00:04:46.801 01:24:45 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:46.801 01:24:45 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:46.801 01:24:45 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:46.801 01:24:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:46.801 01:24:45 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:46.801 01:24:45 -- pm/common@44 -- $ pid=6192 00:04:46.801 01:24:45 -- pm/common@50 -- $ kill -TERM 6192 00:04:46.801 01:24:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:46.801 01:24:45 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:46.801 01:24:45 -- pm/common@44 -- $ pid=6193 00:04:46.801 01:24:45 -- pm/common@50 -- $ kill -TERM 6193 00:04:47.062 01:24:45 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:47.062 01:24:45 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:47.062 01:24:45 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:47.062 01:24:45 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:47.062 01:24:45 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.062 01:24:45 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.062 01:24:45 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.062 01:24:45 -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.062 01:24:45 -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.062 01:24:45 -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.062 01:24:45 -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.062 01:24:45 
-- scripts/common.sh@338 -- # local 'op=<' 00:04:47.062 01:24:45 -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.062 01:24:45 -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.062 01:24:45 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.062 01:24:45 -- scripts/common.sh@344 -- # case "$op" in 00:04:47.062 01:24:45 -- scripts/common.sh@345 -- # : 1 00:04:47.062 01:24:45 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.062 01:24:45 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:47.062 01:24:45 -- scripts/common.sh@365 -- # decimal 1 00:04:47.062 01:24:45 -- scripts/common.sh@353 -- # local d=1 00:04:47.062 01:24:45 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.062 01:24:45 -- scripts/common.sh@355 -- # echo 1 00:04:47.062 01:24:45 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.062 01:24:45 -- scripts/common.sh@366 -- # decimal 2 00:04:47.062 01:24:45 -- scripts/common.sh@353 -- # local d=2 00:04:47.062 01:24:45 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.062 01:24:45 -- scripts/common.sh@355 -- # echo 2 00:04:47.062 01:24:45 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.062 01:24:45 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.062 01:24:45 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.062 01:24:45 -- scripts/common.sh@368 -- # return 0 00:04:47.062 01:24:45 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.062 01:24:45 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:47.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.062 --rc genhtml_branch_coverage=1 00:04:47.062 --rc genhtml_function_coverage=1 00:04:47.062 --rc genhtml_legend=1 00:04:47.062 --rc geninfo_all_blocks=1 00:04:47.062 --rc geninfo_unexecuted_blocks=1 00:04:47.062 00:04:47.062 ' 00:04:47.062 01:24:45 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:47.062 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.062 --rc genhtml_branch_coverage=1 00:04:47.062 --rc genhtml_function_coverage=1 00:04:47.062 --rc genhtml_legend=1 00:04:47.062 --rc geninfo_all_blocks=1 00:04:47.062 --rc geninfo_unexecuted_blocks=1 00:04:47.062 00:04:47.062 ' 00:04:47.062 01:24:45 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:47.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.062 --rc genhtml_branch_coverage=1 00:04:47.062 --rc genhtml_function_coverage=1 00:04:47.062 --rc genhtml_legend=1 00:04:47.062 --rc geninfo_all_blocks=1 00:04:47.062 --rc geninfo_unexecuted_blocks=1 00:04:47.062 00:04:47.062 ' 00:04:47.062 01:24:45 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:47.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.062 --rc genhtml_branch_coverage=1 00:04:47.062 --rc genhtml_function_coverage=1 00:04:47.062 --rc genhtml_legend=1 00:04:47.062 --rc geninfo_all_blocks=1 00:04:47.062 --rc geninfo_unexecuted_blocks=1 00:04:47.062 00:04:47.062 ' 00:04:47.062 01:24:45 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:47.062 01:24:45 -- nvmf/common.sh@7 -- # uname -s 00:04:47.062 01:24:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:47.062 01:24:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:47.062 01:24:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:47.062 01:24:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:47.062 01:24:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:47.062 01:24:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:47.062 01:24:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:47.062 01:24:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:47.062 01:24:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:47.062 01:24:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:47.062 01:24:45 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cf31f245-87c1-4e89-8524-8663cf6c0ef5 00:04:47.062 01:24:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=cf31f245-87c1-4e89-8524-8663cf6c0ef5 00:04:47.062 01:24:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:47.062 01:24:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:47.062 01:24:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:47.062 01:24:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:47.062 01:24:45 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:47.062 01:24:45 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:47.062 01:24:45 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:47.062 01:24:45 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:47.062 01:24:45 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:47.062 01:24:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.062 01:24:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.062 01:24:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.062 01:24:45 -- paths/export.sh@5 -- # export PATH 00:04:47.062 01:24:45 -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.062 01:24:45 -- nvmf/common.sh@51 -- # : 0 00:04:47.062 01:24:45 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:47.062 01:24:45 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:47.062 01:24:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:47.062 01:24:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:47.062 01:24:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:47.062 01:24:45 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:47.062 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:47.062 01:24:45 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:47.062 01:24:45 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:47.062 01:24:45 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:47.062 01:24:45 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:47.062 01:24:45 -- spdk/autotest.sh@32 -- # uname -s 00:04:47.062 01:24:45 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:47.062 01:24:45 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:47.062 01:24:45 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:47.322 01:24:45 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:47.322 01:24:45 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:47.322 01:24:45 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:47.322 01:24:46 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:47.322 01:24:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:47.322 01:24:46 -- spdk/autotest.sh@48 -- # udevadm_pid=67712 
00:04:47.322 01:24:46 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:47.322 01:24:46 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:47.322 01:24:46 -- pm/common@17 -- # local monitor 00:04:47.322 01:24:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:47.322 01:24:46 -- pm/common@21 -- # date +%s 00:04:47.322 01:24:46 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728437086 00:04:47.322 01:24:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:47.322 01:24:46 -- pm/common@25 -- # sleep 1 00:04:47.322 01:24:46 -- pm/common@21 -- # date +%s 00:04:47.322 01:24:46 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728437086 00:04:47.322 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728437086_collect-cpu-load.pm.log 00:04:47.322 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728437086_collect-vmstat.pm.log 00:04:48.261 01:24:47 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:48.261 01:24:47 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:48.261 01:24:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:48.261 01:24:47 -- common/autotest_common.sh@10 -- # set +x 00:04:48.261 01:24:47 -- spdk/autotest.sh@59 -- # create_test_list 00:04:48.261 01:24:47 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:48.261 01:24:47 -- common/autotest_common.sh@10 -- # set +x 00:04:48.261 01:24:47 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:48.261 01:24:47 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:48.261 01:24:47 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:48.261 01:24:47 -- 
spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:48.261 01:24:47 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:48.261 01:24:47 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:48.261 01:24:47 -- common/autotest_common.sh@1455 -- # uname 00:04:48.261 01:24:47 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:48.261 01:24:47 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:48.261 01:24:47 -- common/autotest_common.sh@1475 -- # uname 00:04:48.261 01:24:47 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:48.261 01:24:47 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:48.261 01:24:47 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:48.520 lcov: LCOV version 1.15 00:04:48.520 01:24:47 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:03.411 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:03.411 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:18.297 01:25:16 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:18.297 01:25:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:18.297 01:25:16 -- common/autotest_common.sh@10 -- # set +x 00:05:18.297 01:25:16 -- spdk/autotest.sh@78 -- # rm -f 00:05:18.297 01:25:16 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:18.297 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:18.297 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:18.297 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:18.297 01:25:16 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:18.297 01:25:16 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:18.297 01:25:16 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:18.297 01:25:16 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:18.297 01:25:16 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:18.297 01:25:16 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:18.297 01:25:16 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:18.297 01:25:16 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:18.297 01:25:16 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:18.297 01:25:16 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:18.297 01:25:16 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:18.297 01:25:16 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:18.297 01:25:16 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:18.297 01:25:16 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:18.297 01:25:16 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:18.297 01:25:16 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:18.297 01:25:16 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:18.297 01:25:16 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:18.297 01:25:16 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:18.297 01:25:16 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:18.297 01:25:16 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 
00:05:18.297 01:25:16 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:18.297 01:25:16 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:18.297 01:25:16 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:18.297 01:25:16 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:18.297 01:25:16 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:18.297 01:25:16 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:18.297 01:25:16 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:18.297 01:25:16 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:18.297 01:25:16 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:18.297 No valid GPT data, bailing 00:05:18.297 01:25:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:18.297 01:25:17 -- scripts/common.sh@394 -- # pt= 00:05:18.297 01:25:17 -- scripts/common.sh@395 -- # return 1 00:05:18.297 01:25:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:18.297 1+0 records in 00:05:18.297 1+0 records out 00:05:18.297 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00880305 s, 119 MB/s 00:05:18.297 01:25:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:18.297 01:25:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:18.297 01:25:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:18.297 01:25:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:18.297 01:25:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:18.297 No valid GPT data, bailing 00:05:18.297 01:25:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:18.297 01:25:17 -- scripts/common.sh@394 -- # pt= 00:05:18.297 01:25:17 -- scripts/common.sh@395 -- # return 1 00:05:18.297 01:25:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:18.297 1+0 records in 
00:05:18.297 1+0 records out 00:05:18.297 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00497561 s, 211 MB/s 00:05:18.297 01:25:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:18.297 01:25:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:18.297 01:25:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:18.297 01:25:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:18.297 01:25:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:18.557 No valid GPT data, bailing 00:05:18.557 01:25:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:18.557 01:25:17 -- scripts/common.sh@394 -- # pt= 00:05:18.557 01:25:17 -- scripts/common.sh@395 -- # return 1 00:05:18.557 01:25:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:18.557 1+0 records in 00:05:18.557 1+0 records out 00:05:18.557 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00393847 s, 266 MB/s 00:05:18.558 01:25:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:18.558 01:25:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:18.558 01:25:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:18.558 01:25:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:18.558 01:25:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:18.558 No valid GPT data, bailing 00:05:18.558 01:25:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:18.558 01:25:17 -- scripts/common.sh@394 -- # pt= 00:05:18.558 01:25:17 -- scripts/common.sh@395 -- # return 1 00:05:18.558 01:25:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:18.558 1+0 records in 00:05:18.558 1+0 records out 00:05:18.558 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00642773 s, 163 MB/s 00:05:18.558 01:25:17 -- spdk/autotest.sh@105 -- # sync 00:05:18.558 01:25:17 -- spdk/autotest.sh@107 -- # 
xtrace_disable_per_cmd reap_spdk_processes 00:05:18.558 01:25:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:18.558 01:25:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:21.850 01:25:20 -- spdk/autotest.sh@111 -- # uname -s 00:05:21.850 01:25:20 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:21.850 01:25:20 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:21.850 01:25:20 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:22.419 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:22.419 Hugepages 00:05:22.419 node hugesize free / total 00:05:22.419 node0 1048576kB 0 / 0 00:05:22.419 node0 2048kB 0 / 0 00:05:22.419 00:05:22.420 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:22.420 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:22.679 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:22.679 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:22.679 01:25:21 -- spdk/autotest.sh@117 -- # uname -s 00:05:22.679 01:25:21 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:22.679 01:25:21 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:22.679 01:25:21 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:23.617 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:23.617 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.617 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.617 01:25:22 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:25.002 01:25:23 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:25.002 01:25:23 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:25.002 01:25:23 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:25.002 01:25:23 -- common/autotest_common.sh@1518 -- # 
get_nvme_bdfs 00:05:25.002 01:25:23 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:25.002 01:25:23 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:25.002 01:25:23 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:25.002 01:25:23 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:25.002 01:25:23 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:25.002 01:25:23 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:25.002 01:25:23 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:25.002 01:25:23 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:25.265 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:25.265 Waiting for block devices as requested 00:05:25.525 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:25.525 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:25.525 01:25:24 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:25.525 01:25:24 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:25.525 01:25:24 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:25.525 01:25:24 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:25.525 01:25:24 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:25.525 01:25:24 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:25.526 01:25:24 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:25.526 01:25:24 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:25.526 01:25:24 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 
00:05:25.526 01:25:24 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:25.526 01:25:24 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:25.526 01:25:24 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:25.526 01:25:24 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:25.526 01:25:24 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:25.526 01:25:24 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:25.526 01:25:24 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:25.526 01:25:24 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:25.526 01:25:24 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:25.526 01:25:24 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:25.526 01:25:24 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:25.526 01:25:24 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:25.526 01:25:24 -- common/autotest_common.sh@1541 -- # continue 00:05:25.526 01:25:24 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:25.526 01:25:24 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:25.526 01:25:24 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:25.526 01:25:24 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:25.526 01:25:24 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:25.526 01:25:24 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:25.526 01:25:24 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:25.526 01:25:24 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:25.526 01:25:24 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:25.526 01:25:24 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:25.526 
01:25:24 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:25.526 01:25:24 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:25.526 01:25:24 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:25.786 01:25:24 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:25.786 01:25:24 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:25.786 01:25:24 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:25.786 01:25:24 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:25.786 01:25:24 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:25.786 01:25:24 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:25.786 01:25:24 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:25.786 01:25:24 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:25.786 01:25:24 -- common/autotest_common.sh@1541 -- # continue 00:05:25.786 01:25:24 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:25.786 01:25:24 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:25.786 01:25:24 -- common/autotest_common.sh@10 -- # set +x 00:05:25.786 01:25:24 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:25.786 01:25:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:25.786 01:25:24 -- common/autotest_common.sh@10 -- # set +x 00:05:25.786 01:25:24 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:26.356 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:26.616 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:26.616 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:26.616 01:25:25 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:26.616 01:25:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:26.616 01:25:25 -- common/autotest_common.sh@10 -- # set +x 00:05:26.616 01:25:25 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:26.616 01:25:25 -- 
common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:26.616 01:25:25 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:26.616 01:25:25 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:26.616 01:25:25 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:26.616 01:25:25 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:26.616 01:25:25 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:26.876 01:25:25 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:26.876 01:25:25 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:26.876 01:25:25 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:26.876 01:25:25 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:26.876 01:25:25 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:26.876 01:25:25 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:26.876 01:25:25 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:26.876 01:25:25 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:26.876 01:25:25 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:26.876 01:25:25 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:26.876 01:25:25 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:26.876 01:25:25 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:26.876 01:25:25 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:26.876 01:25:25 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:26.876 01:25:25 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:26.876 01:25:25 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:26.876 01:25:25 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:26.876 01:25:25 -- 
common/autotest_common.sh@1570 -- # return 0 00:05:26.876 01:25:25 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:26.876 01:25:25 -- common/autotest_common.sh@1578 -- # return 0 00:05:26.876 01:25:25 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:26.876 01:25:25 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:26.876 01:25:25 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:26.876 01:25:25 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:26.876 01:25:25 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:26.876 01:25:25 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:26.876 01:25:25 -- common/autotest_common.sh@10 -- # set +x 00:05:26.876 01:25:25 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:26.876 01:25:25 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:26.876 01:25:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.876 01:25:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.876 01:25:25 -- common/autotest_common.sh@10 -- # set +x 00:05:26.876 ************************************ 00:05:26.876 START TEST env 00:05:26.876 ************************************ 00:05:26.876 01:25:25 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:26.876 * Looking for test storage... 
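The long `lt 1.15 2` trace that follows is `scripts/common.sh` comparing the installed `lcov` version against 2 to decide which coverage flags to export. A self-contained sketch of the same component-wise comparison; `ver_lt` is a hypothetical helper written for illustration, not the actual `cmp_versions` code:

```shell
# Split two dotted versions on '.', '-' and ':' (the IFS the traced code
# uses) and compare them component-wise, treating missing components as 0.
ver_lt() {
  local IFS='.-:'
  local -a v1 v2
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < len; i++ )); do
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing part decides
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
  done
  return 1   # versions are equal
}

ver_lt 1.15 2 && echo "1.15 < 2"   # the case traced in the log
```

Because 1 < 2 in the first component, the comparison short-circuits and the suite selects the pre-2.0 `--rc lcov_branch_coverage=1` option spelling seen in the exports below.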
00:05:27.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:27.146 01:25:25 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:27.146 01:25:25 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:27.146 01:25:25 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:27.146 01:25:25 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:27.146 01:25:25 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.146 01:25:25 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.146 01:25:25 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.146 01:25:25 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.146 01:25:25 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.146 01:25:25 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.146 01:25:25 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.146 01:25:25 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.146 01:25:25 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.146 01:25:25 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.146 01:25:25 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.146 01:25:25 env -- scripts/common.sh@344 -- # case "$op" in 00:05:27.146 01:25:25 env -- scripts/common.sh@345 -- # : 1 00:05:27.146 01:25:25 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.146 01:25:25 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.146 01:25:25 env -- scripts/common.sh@365 -- # decimal 1 00:05:27.146 01:25:25 env -- scripts/common.sh@353 -- # local d=1 00:05:27.146 01:25:25 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.146 01:25:25 env -- scripts/common.sh@355 -- # echo 1 00:05:27.146 01:25:25 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.146 01:25:25 env -- scripts/common.sh@366 -- # decimal 2 00:05:27.146 01:25:25 env -- scripts/common.sh@353 -- # local d=2 00:05:27.146 01:25:25 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.146 01:25:25 env -- scripts/common.sh@355 -- # echo 2 00:05:27.146 01:25:25 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.146 01:25:25 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.146 01:25:25 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.146 01:25:25 env -- scripts/common.sh@368 -- # return 0 00:05:27.146 01:25:25 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.146 01:25:25 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:27.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.146 --rc genhtml_branch_coverage=1 00:05:27.146 --rc genhtml_function_coverage=1 00:05:27.146 --rc genhtml_legend=1 00:05:27.146 --rc geninfo_all_blocks=1 00:05:27.146 --rc geninfo_unexecuted_blocks=1 00:05:27.146 00:05:27.146 ' 00:05:27.146 01:25:25 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:27.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.146 --rc genhtml_branch_coverage=1 00:05:27.146 --rc genhtml_function_coverage=1 00:05:27.146 --rc genhtml_legend=1 00:05:27.146 --rc geninfo_all_blocks=1 00:05:27.146 --rc geninfo_unexecuted_blocks=1 00:05:27.146 00:05:27.146 ' 00:05:27.146 01:25:25 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:27.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:27.146 --rc genhtml_branch_coverage=1 00:05:27.146 --rc genhtml_function_coverage=1 00:05:27.146 --rc genhtml_legend=1 00:05:27.146 --rc geninfo_all_blocks=1 00:05:27.146 --rc geninfo_unexecuted_blocks=1 00:05:27.146 00:05:27.146 ' 00:05:27.146 01:25:25 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:27.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.146 --rc genhtml_branch_coverage=1 00:05:27.146 --rc genhtml_function_coverage=1 00:05:27.146 --rc genhtml_legend=1 00:05:27.146 --rc geninfo_all_blocks=1 00:05:27.146 --rc geninfo_unexecuted_blocks=1 00:05:27.146 00:05:27.146 ' 00:05:27.146 01:25:25 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:27.146 01:25:25 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.146 01:25:25 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.146 01:25:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.146 ************************************ 00:05:27.146 START TEST env_memory 00:05:27.146 ************************************ 00:05:27.146 01:25:25 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:27.146 00:05:27.146 00:05:27.146 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.146 http://cunit.sourceforge.net/ 00:05:27.146 00:05:27.146 00:05:27.146 Suite: memory 00:05:27.146 Test: alloc and free memory map ...[2024-10-09 01:25:25.950963] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:27.146 passed 00:05:27.146 Test: mem map translation ...[2024-10-09 01:25:25.993854] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:27.147 [2024-10-09 01:25:25.993922] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:27.147 [2024-10-09 01:25:25.993978] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:27.147 [2024-10-09 01:25:25.993998] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:27.408 passed 00:05:27.408 Test: mem map registration ...[2024-10-09 01:25:26.059254] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:27.408 [2024-10-09 01:25:26.059327] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:27.408 passed 00:05:27.408 Test: mem map adjacent registrations ...passed 00:05:27.408 00:05:27.408 Run Summary: Type Total Ran Passed Failed Inactive 00:05:27.408 suites 1 1 n/a 0 0 00:05:27.408 tests 4 4 4 0 0 00:05:27.408 asserts 152 152 152 0 n/a 00:05:27.408 00:05:27.408 Elapsed time = 0.235 seconds 00:05:27.408 00:05:27.408 real 0m0.289s 00:05:27.408 user 0m0.251s 00:05:27.408 sys 0m0.026s 00:05:27.408 01:25:26 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.408 01:25:26 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:27.408 ************************************ 00:05:27.408 END TEST env_memory 00:05:27.408 ************************************ 00:05:27.408 01:25:26 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:27.408 01:25:26 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.408 01:25:26 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.408 01:25:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.408 
************************************ 00:05:27.408 START TEST env_vtophys 00:05:27.408 ************************************ 00:05:27.408 01:25:26 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:27.408 EAL: lib.eal log level changed from notice to debug 00:05:27.408 EAL: Detected lcore 0 as core 0 on socket 0 00:05:27.408 EAL: Detected lcore 1 as core 0 on socket 0 00:05:27.408 EAL: Detected lcore 2 as core 0 on socket 0 00:05:27.408 EAL: Detected lcore 3 as core 0 on socket 0 00:05:27.408 EAL: Detected lcore 4 as core 0 on socket 0 00:05:27.408 EAL: Detected lcore 5 as core 0 on socket 0 00:05:27.408 EAL: Detected lcore 6 as core 0 on socket 0 00:05:27.408 EAL: Detected lcore 7 as core 0 on socket 0 00:05:27.408 EAL: Detected lcore 8 as core 0 on socket 0 00:05:27.408 EAL: Detected lcore 9 as core 0 on socket 0 00:05:27.408 EAL: Maximum logical cores by configuration: 128 00:05:27.408 EAL: Detected CPU lcores: 10 00:05:27.408 EAL: Detected NUMA nodes: 1 00:05:27.408 EAL: Checking presence of .so 'librte_eal.so.25.0' 00:05:27.408 EAL: Detected shared linkage of DPDK 00:05:27.408 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25.0 00:05:27.408 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25.0 00:05:27.408 EAL: Registered [vdev] bus. 
00:05:27.408 EAL: bus.vdev log level changed from disabled to notice 00:05:27.408 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25.0 00:05:27.408 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25.0 00:05:27.408 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:27.408 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:27.408 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:05:27.408 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:05:27.408 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:05:27.408 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:05:27.669 EAL: No shared files mode enabled, IPC will be disabled 00:05:27.669 EAL: No shared files mode enabled, IPC is disabled 00:05:27.669 EAL: Selected IOVA mode 'PA' 00:05:27.669 EAL: Probing VFIO support... 00:05:27.669 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:27.669 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:27.669 EAL: Ask a virtual area of 0x2e000 bytes 00:05:27.669 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:27.669 EAL: Setting up physically contiguous memory... 
00:05:27.669 EAL: Setting maximum number of open files to 524288 00:05:27.669 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:27.669 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:27.669 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.669 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:27.669 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.669 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.669 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:27.669 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:27.669 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.669 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:27.669 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.669 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.669 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:27.669 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:27.669 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.669 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:27.669 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.669 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.669 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:27.669 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:27.669 EAL: Ask a virtual area of 0x61000 bytes 00:05:27.669 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:27.669 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:27.669 EAL: Ask a virtual area of 0x400000000 bytes 00:05:27.669 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:27.669 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:27.669 EAL: Hugepages will be freed exactly as allocated. 
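Each "Ask a virtual area of 0x400000000 bytes" above is EAL reserving virtual address space for one memseg list, sized from the earlier "Creating 4 segment lists: n_segs:8192 ... hugepage_sz:2097152" line. The arithmetic checks out:

```shell
# 8192 segments * 2 MiB hugepages = 16 GiB of reserved VA per memseg list,
# i.e. the 0x400000000 figure printed for each of the 4 lists above.
bytes=$(( 8192 * 2097152 ))
printf '0x%x bytes (%d GiB) per memseg list\n' "$bytes" $(( bytes >> 30 ))
```

The small 0x61000-byte areas reserved alongside each list hold the list's bookkeeping metadata, separate from the segment payload range.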
00:05:27.669 EAL: No shared files mode enabled, IPC is disabled 00:05:27.669 EAL: No shared files mode enabled, IPC is disabled 00:05:27.669 EAL: TSC frequency is ~2294600 KHz 00:05:27.669 EAL: Main lcore 0 is ready (tid=7fc58cefda40;cpuset=[0]) 00:05:27.669 EAL: Trying to obtain current memory policy. 00:05:27.669 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.669 EAL: Restoring previous memory policy: 0 00:05:27.669 EAL: request: mp_malloc_sync 00:05:27.669 EAL: No shared files mode enabled, IPC is disabled 00:05:27.669 EAL: Heap on socket 0 was expanded by 2MB 00:05:27.669 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:27.669 EAL: No shared files mode enabled, IPC is disabled 00:05:27.669 EAL: Mem event callback 'spdk:(nil)' registered 00:05:27.669 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:27.669 00:05:27.669 00:05:27.669 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.669 http://cunit.sourceforge.net/ 00:05:27.669 00:05:27.669 00:05:27.669 Suite: components_suite 00:05:27.929 Test: vtophys_malloc_test ...passed 00:05:27.929 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:27.929 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.929 EAL: Restoring previous memory policy: 4 00:05:27.929 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.929 EAL: request: mp_malloc_sync 00:05:27.929 EAL: No shared files mode enabled, IPC is disabled 00:05:27.929 EAL: Heap on socket 0 was expanded by 4MB 00:05:27.929 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.929 EAL: request: mp_malloc_sync 00:05:27.929 EAL: No shared files mode enabled, IPC is disabled 00:05:27.929 EAL: Heap on socket 0 was shrunk by 4MB 00:05:27.929 EAL: Trying to obtain current memory policy. 
00:05:27.929 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.929 EAL: Restoring previous memory policy: 4 00:05:27.929 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.929 EAL: request: mp_malloc_sync 00:05:27.929 EAL: No shared files mode enabled, IPC is disabled 00:05:27.929 EAL: Heap on socket 0 was expanded by 6MB 00:05:27.929 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.929 EAL: request: mp_malloc_sync 00:05:27.929 EAL: No shared files mode enabled, IPC is disabled 00:05:27.929 EAL: Heap on socket 0 was shrunk by 6MB 00:05:27.929 EAL: Trying to obtain current memory policy. 00:05:27.929 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.929 EAL: Restoring previous memory policy: 4 00:05:27.929 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.929 EAL: request: mp_malloc_sync 00:05:27.929 EAL: No shared files mode enabled, IPC is disabled 00:05:27.929 EAL: Heap on socket 0 was expanded by 10MB 00:05:27.929 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.929 EAL: request: mp_malloc_sync 00:05:27.929 EAL: No shared files mode enabled, IPC is disabled 00:05:27.929 EAL: Heap on socket 0 was shrunk by 10MB 00:05:27.929 EAL: Trying to obtain current memory policy. 00:05:27.929 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.929 EAL: Restoring previous memory policy: 4 00:05:27.929 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.929 EAL: request: mp_malloc_sync 00:05:27.929 EAL: No shared files mode enabled, IPC is disabled 00:05:27.929 EAL: Heap on socket 0 was expanded by 18MB 00:05:27.929 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.929 EAL: request: mp_malloc_sync 00:05:27.929 EAL: No shared files mode enabled, IPC is disabled 00:05:27.929 EAL: Heap on socket 0 was shrunk by 18MB 00:05:27.929 EAL: Trying to obtain current memory policy. 
00:05:27.929 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:27.929 EAL: Restoring previous memory policy: 4 00:05:27.929 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.929 EAL: request: mp_malloc_sync 00:05:27.929 EAL: No shared files mode enabled, IPC is disabled 00:05:27.929 EAL: Heap on socket 0 was expanded by 34MB 00:05:28.189 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.189 EAL: request: mp_malloc_sync 00:05:28.189 EAL: No shared files mode enabled, IPC is disabled 00:05:28.189 EAL: Heap on socket 0 was shrunk by 34MB 00:05:28.189 EAL: Trying to obtain current memory policy. 00:05:28.189 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.189 EAL: Restoring previous memory policy: 4 00:05:28.189 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.189 EAL: request: mp_malloc_sync 00:05:28.189 EAL: No shared files mode enabled, IPC is disabled 00:05:28.189 EAL: Heap on socket 0 was expanded by 66MB 00:05:28.189 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.189 EAL: request: mp_malloc_sync 00:05:28.189 EAL: No shared files mode enabled, IPC is disabled 00:05:28.189 EAL: Heap on socket 0 was shrunk by 66MB 00:05:28.189 EAL: Trying to obtain current memory policy. 00:05:28.189 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.189 EAL: Restoring previous memory policy: 4 00:05:28.189 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.189 EAL: request: mp_malloc_sync 00:05:28.189 EAL: No shared files mode enabled, IPC is disabled 00:05:28.189 EAL: Heap on socket 0 was expanded by 130MB 00:05:28.189 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.189 EAL: request: mp_malloc_sync 00:05:28.189 EAL: No shared files mode enabled, IPC is disabled 00:05:28.189 EAL: Heap on socket 0 was shrunk by 130MB 00:05:28.189 EAL: Trying to obtain current memory policy. 
00:05:28.189 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.189 EAL: Restoring previous memory policy: 4 00:05:28.189 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.189 EAL: request: mp_malloc_sync 00:05:28.189 EAL: No shared files mode enabled, IPC is disabled 00:05:28.189 EAL: Heap on socket 0 was expanded by 258MB 00:05:28.189 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.189 EAL: request: mp_malloc_sync 00:05:28.189 EAL: No shared files mode enabled, IPC is disabled 00:05:28.189 EAL: Heap on socket 0 was shrunk by 258MB 00:05:28.189 EAL: Trying to obtain current memory policy. 00:05:28.189 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.450 EAL: Restoring previous memory policy: 4 00:05:28.450 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.450 EAL: request: mp_malloc_sync 00:05:28.450 EAL: No shared files mode enabled, IPC is disabled 00:05:28.450 EAL: Heap on socket 0 was expanded by 514MB 00:05:28.450 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.450 EAL: request: mp_malloc_sync 00:05:28.450 EAL: No shared files mode enabled, IPC is disabled 00:05:28.450 EAL: Heap on socket 0 was shrunk by 514MB 00:05:28.450 EAL: Trying to obtain current memory policy. 
00:05:28.450 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.709 EAL: Restoring previous memory policy: 4 00:05:28.709 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.709 EAL: request: mp_malloc_sync 00:05:28.709 EAL: No shared files mode enabled, IPC is disabled 00:05:28.709 EAL: Heap on socket 0 was expanded by 1026MB 00:05:28.969 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.230 EAL: request: mp_malloc_sync 00:05:29.230 EAL: No shared files mode enabled, IPC is disabled 00:05:29.230 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:29.230 passed 00:05:29.230 00:05:29.230 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.230 suites 1 1 n/a 0 0 00:05:29.230 tests 2 2 2 0 0 00:05:29.230 asserts 5274 5274 5274 0 n/a 00:05:29.230 00:05:29.230 Elapsed time = 1.389 seconds 00:05:29.230 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.230 EAL: request: mp_malloc_sync 00:05:29.230 EAL: No shared files mode enabled, IPC is disabled 00:05:29.230 EAL: Heap on socket 0 was shrunk by 2MB 00:05:29.230 EAL: No shared files mode enabled, IPC is disabled 00:05:29.230 EAL: No shared files mode enabled, IPC is disabled 00:05:29.230 EAL: No shared files mode enabled, IPC is disabled 00:05:29.230 00:05:29.230 real 0m1.659s 00:05:29.230 user 0m0.779s 00:05:29.230 sys 0m0.746s 00:05:29.230 01:25:27 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.230 01:25:27 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:29.230 ************************************ 00:05:29.230 END TEST env_vtophys 00:05:29.230 ************************************ 00:05:29.230 01:25:27 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:29.230 01:25:27 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.230 01:25:27 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.230 01:25:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.230 
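The expand/shrink sizes walked by `vtophys_malloc_test` above (4MB, 6MB, 10MB, 18MB, ..., 1026MB) are not arbitrary: each allocation appears to be 2^n + 2 MB, so the test crosses every power-of-two boundary with a constant 2MB offset. The sequence can be reproduced directly:

```shell
# Generate the allocation sizes seen in the log: (1 << n) + 2 MB for n=1..10.
for n in $(seq 1 10); do
  printf '%dMB ' $(( (1 << n) + 2 ))
done
printf '\n'
# prints: 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB
```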
************************************ 00:05:29.230 START TEST env_pci 00:05:29.230 ************************************ 00:05:29.230 01:25:27 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:29.230 00:05:29.230 00:05:29.230 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.230 http://cunit.sourceforge.net/ 00:05:29.230 00:05:29.230 00:05:29.230 Suite: pci 00:05:29.230 Test: pci_hook ...[2024-10-09 01:25:27.996264] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69963 has claimed it 00:05:29.230 passed 00:05:29.230 00:05:29.230 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.230 suites 1 1 n/a 0 0 00:05:29.230 tests 1 1 1 0 0 00:05:29.230 asserts 25 25 25 0 n/a 00:05:29.230 00:05:29.230 Elapsed time = 0.007 seconds 00:05:29.230 EAL: Cannot find device (10000:00:01.0) 00:05:29.230 EAL: Failed to attach device on primary process 00:05:29.230 00:05:29.230 real 0m0.082s 00:05:29.230 user 0m0.037s 00:05:29.230 sys 0m0.044s 00:05:29.230 01:25:28 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.230 01:25:28 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:29.230 ************************************ 00:05:29.230 END TEST env_pci 00:05:29.230 ************************************ 00:05:29.230 01:25:28 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:29.231 01:25:28 env -- env/env.sh@15 -- # uname 00:05:29.231 01:25:28 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:29.231 01:25:28 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:29.231 01:25:28 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:29.231 01:25:28 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:29.231 01:25:28 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.231 01:25:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.231 ************************************ 00:05:29.231 START TEST env_dpdk_post_init 00:05:29.231 ************************************ 00:05:29.231 01:25:28 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:29.490 EAL: Detected CPU lcores: 10 00:05:29.490 EAL: Detected NUMA nodes: 1 00:05:29.490 EAL: Detected shared linkage of DPDK 00:05:29.490 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:29.490 EAL: Selected IOVA mode 'PA' 00:05:29.490 Starting DPDK initialization... 00:05:29.490 Starting SPDK post initialization... 00:05:29.490 SPDK NVMe probe 00:05:29.490 Attaching to 0000:00:10.0 00:05:29.490 Attaching to 0000:00:11.0 00:05:29.490 Attached to 0000:00:10.0 00:05:29.490 Attached to 0000:00:11.0 00:05:29.490 Cleaning up... 
00:05:29.490 00:05:29.490 real 0m0.258s 00:05:29.490 user 0m0.071s 00:05:29.490 sys 0m0.088s 00:05:29.490 01:25:28 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.490 01:25:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:29.490 ************************************ 00:05:29.490 END TEST env_dpdk_post_init 00:05:29.490 ************************************ 00:05:29.750 01:25:28 env -- env/env.sh@26 -- # uname 00:05:29.750 01:25:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:29.750 01:25:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:29.750 01:25:28 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.750 01:25:28 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.750 01:25:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.750 ************************************ 00:05:29.750 START TEST env_mem_callbacks 00:05:29.750 ************************************ 00:05:29.750 01:25:28 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:29.750 EAL: Detected CPU lcores: 10 00:05:29.750 EAL: Detected NUMA nodes: 1 00:05:29.750 EAL: Detected shared linkage of DPDK 00:05:29.750 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:29.750 EAL: Selected IOVA mode 'PA' 00:05:29.750 00:05:29.750 00:05:29.750 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.750 http://cunit.sourceforge.net/ 00:05:29.750 00:05:29.750 00:05:29.750 Suite: memory 00:05:29.750 Test: test ... 
00:05:29.750 register 0x200000200000 2097152 00:05:29.750 malloc 3145728 00:05:29.750 register 0x200000400000 4194304 00:05:29.750 buf 0x200000500000 len 3145728 PASSED 00:05:29.750 malloc 64 00:05:29.750 buf 0x2000004fff40 len 64 PASSED 00:05:29.750 malloc 4194304 00:05:29.750 register 0x200000800000 6291456 00:05:29.750 buf 0x200000a00000 len 4194304 PASSED 00:05:29.750 free 0x200000500000 3145728 00:05:29.750 free 0x2000004fff40 64 00:05:29.750 unregister 0x200000400000 4194304 PASSED 00:05:29.750 free 0x200000a00000 4194304 00:05:29.751 unregister 0x200000800000 6291456 PASSED 00:05:29.751 malloc 8388608 00:05:29.751 register 0x200000400000 10485760 00:05:29.751 buf 0x200000600000 len 8388608 PASSED 00:05:29.751 free 0x200000600000 8388608 00:05:29.751 unregister 0x200000400000 10485760 PASSED 00:05:29.751 passed 00:05:29.751 00:05:29.751 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.751 suites 1 1 n/a 0 0 00:05:29.751 tests 1 1 1 0 0 00:05:29.751 asserts 15 15 15 0 n/a 00:05:29.751 00:05:29.751 Elapsed time = 0.008 seconds 00:05:30.010 00:05:30.010 real 0m0.194s 00:05:30.010 user 0m0.034s 00:05:30.010 sys 0m0.059s 00:05:30.010 01:25:28 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.010 01:25:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:30.010 ************************************ 00:05:30.010 END TEST env_mem_callbacks 00:05:30.010 ************************************ 00:05:30.010 00:05:30.010 real 0m3.062s 00:05:30.010 user 0m1.417s 00:05:30.010 sys 0m1.323s 00:05:30.010 01:25:28 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.010 01:25:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:30.010 ************************************ 00:05:30.010 END TEST env 00:05:30.010 ************************************ 00:05:30.010 01:25:28 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:30.010 01:25:28 -- 
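In the mem_callbacks trace above, the regions EAL registers are larger than the corresponding `malloc` requests: `malloc 3145728` (3 MiB) triggers `register ... 4194304` (4 MiB). That is consistent with rounding each mapping up to whole 2 MiB hugepages; a sketch of that rounding, with the caveat that the exact-multiple cases (4 MiB registering 6 MiB, 8 MiB registering 10 MiB) presumably gain one extra page from the allocator's own header overhead:

```shell
# 2 MiB hugepage size used throughout this suite (hugepage_sz:2097152).
huge=2097152
# Round a length up to a whole number of hugepages.
round_up() { echo $(( ( $1 + huge - 1 ) / huge * huge )); }

round_up 3145728   # 4194304: the region registered for 'malloc 3145728'
```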
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.010 01:25:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.010 01:25:28 -- common/autotest_common.sh@10 -- # set +x 00:05:30.010 ************************************ 00:05:30.011 START TEST rpc 00:05:30.011 ************************************ 00:05:30.011 01:25:28 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:30.011 * Looking for test storage... 00:05:30.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:30.011 01:25:28 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:30.011 01:25:28 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:30.011 01:25:28 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:30.271 01:25:28 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:30.271 01:25:28 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.271 01:25:28 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.271 01:25:28 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.271 01:25:28 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.271 01:25:28 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.271 01:25:28 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.271 01:25:28 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.271 01:25:28 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.271 01:25:28 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.271 01:25:28 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.271 01:25:28 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.271 01:25:28 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:30.271 01:25:28 rpc -- scripts/common.sh@345 -- # : 1 00:05:30.271 01:25:28 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.271 01:25:28 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.271 01:25:28 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:30.271 01:25:28 rpc -- scripts/common.sh@353 -- # local d=1 00:05:30.271 01:25:28 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.271 01:25:28 rpc -- scripts/common.sh@355 -- # echo 1 00:05:30.271 01:25:28 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.271 01:25:28 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:30.271 01:25:28 rpc -- scripts/common.sh@353 -- # local d=2 00:05:30.271 01:25:28 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.271 01:25:28 rpc -- scripts/common.sh@355 -- # echo 2 00:05:30.271 01:25:28 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.271 01:25:28 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.271 01:25:28 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.271 01:25:28 rpc -- scripts/common.sh@368 -- # return 0 00:05:30.271 01:25:28 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.271 01:25:28 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:30.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.271 --rc genhtml_branch_coverage=1 00:05:30.271 --rc genhtml_function_coverage=1 00:05:30.271 --rc genhtml_legend=1 00:05:30.271 --rc geninfo_all_blocks=1 00:05:30.271 --rc geninfo_unexecuted_blocks=1 00:05:30.271 00:05:30.271 ' 00:05:30.271 01:25:28 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:30.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.271 --rc genhtml_branch_coverage=1 00:05:30.271 --rc genhtml_function_coverage=1 00:05:30.271 --rc genhtml_legend=1 00:05:30.271 --rc geninfo_all_blocks=1 00:05:30.271 --rc geninfo_unexecuted_blocks=1 00:05:30.271 00:05:30.271 ' 00:05:30.271 01:25:29 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:30.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:30.271 --rc genhtml_branch_coverage=1 00:05:30.271 --rc genhtml_function_coverage=1 00:05:30.271 --rc genhtml_legend=1 00:05:30.271 --rc geninfo_all_blocks=1 00:05:30.271 --rc geninfo_unexecuted_blocks=1 00:05:30.271 00:05:30.271 ' 00:05:30.271 01:25:29 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:30.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.271 --rc genhtml_branch_coverage=1 00:05:30.271 --rc genhtml_function_coverage=1 00:05:30.271 --rc genhtml_legend=1 00:05:30.271 --rc geninfo_all_blocks=1 00:05:30.271 --rc geninfo_unexecuted_blocks=1 00:05:30.271 00:05:30.271 ' 00:05:30.271 01:25:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70090 00:05:30.271 01:25:29 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:30.271 01:25:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.271 01:25:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70090 00:05:30.271 01:25:29 rpc -- common/autotest_common.sh@831 -- # '[' -z 70090 ']' 00:05:30.271 01:25:29 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.271 01:25:29 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.271 01:25:29 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.271 01:25:29 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.271 01:25:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.271 [2024-10-09 01:25:29.110299] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:05:30.272 [2024-10-09 01:25:29.110461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70090 ] 00:05:30.532 [2024-10-09 01:25:29.252117] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:30.532 [2024-10-09 01:25:29.281867] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.532 [2024-10-09 01:25:29.334431] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:30.532 [2024-10-09 01:25:29.334499] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70090' to capture a snapshot of events at runtime. 00:05:30.532 [2024-10-09 01:25:29.334509] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:30.532 [2024-10-09 01:25:29.334527] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:30.532 [2024-10-09 01:25:29.334536] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70090 for offline analysis/debug. 
00:05:30.532 [2024-10-09 01:25:29.334929] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.102 01:25:29 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:31.102 01:25:29 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:31.102 01:25:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:31.102 01:25:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:31.102 01:25:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:31.102 01:25:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:31.102 01:25:29 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.102 01:25:29 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.102 01:25:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.102 ************************************ 00:05:31.102 START TEST rpc_integrity 00:05:31.102 ************************************ 00:05:31.102 01:25:29 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:31.102 01:25:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:31.102 01:25:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.102 01:25:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.102 01:25:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.102 01:25:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:31.102 01:25:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:31.361 01:25:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:31.361 01:25:30 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:31.361 01:25:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.361 01:25:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.361 01:25:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.361 01:25:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:31.361 01:25:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:31.362 01:25:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.362 01:25:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.362 01:25:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.362 01:25:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:31.362 { 00:05:31.362 "name": "Malloc0", 00:05:31.362 "aliases": [ 00:05:31.362 "2b4434a1-c0c6-4a22-9f4b-7b2628f47b63" 00:05:31.362 ], 00:05:31.362 "product_name": "Malloc disk", 00:05:31.362 "block_size": 512, 00:05:31.362 "num_blocks": 16384, 00:05:31.362 "uuid": "2b4434a1-c0c6-4a22-9f4b-7b2628f47b63", 00:05:31.362 "assigned_rate_limits": { 00:05:31.362 "rw_ios_per_sec": 0, 00:05:31.362 "rw_mbytes_per_sec": 0, 00:05:31.362 "r_mbytes_per_sec": 0, 00:05:31.362 "w_mbytes_per_sec": 0 00:05:31.362 }, 00:05:31.362 "claimed": false, 00:05:31.362 "zoned": false, 00:05:31.362 "supported_io_types": { 00:05:31.362 "read": true, 00:05:31.362 "write": true, 00:05:31.362 "unmap": true, 00:05:31.362 "flush": true, 00:05:31.362 "reset": true, 00:05:31.362 "nvme_admin": false, 00:05:31.362 "nvme_io": false, 00:05:31.362 "nvme_io_md": false, 00:05:31.362 "write_zeroes": true, 00:05:31.362 "zcopy": true, 00:05:31.362 "get_zone_info": false, 00:05:31.362 "zone_management": false, 00:05:31.362 "zone_append": false, 00:05:31.362 "compare": false, 00:05:31.362 "compare_and_write": false, 00:05:31.362 "abort": true, 00:05:31.362 "seek_hole": false, 
00:05:31.362 "seek_data": false, 00:05:31.362 "copy": true, 00:05:31.362 "nvme_iov_md": false 00:05:31.362 }, 00:05:31.362 "memory_domains": [ 00:05:31.362 { 00:05:31.362 "dma_device_id": "system", 00:05:31.362 "dma_device_type": 1 00:05:31.362 }, 00:05:31.362 { 00:05:31.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.362 "dma_device_type": 2 00:05:31.362 } 00:05:31.362 ], 00:05:31.362 "driver_specific": {} 00:05:31.362 } 00:05:31.362 ]' 00:05:31.362 01:25:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:31.362 01:25:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:31.362 01:25:30 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:31.362 01:25:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.362 01:25:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.362 [2024-10-09 01:25:30.103609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:31.362 [2024-10-09 01:25:30.103719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:31.362 [2024-10-09 01:25:30.103755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:31.362 [2024-10-09 01:25:30.103770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:31.362 [2024-10-09 01:25:30.106434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:31.362 [2024-10-09 01:25:30.106492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:31.362 Passthru0 00:05:31.362 01:25:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.362 01:25:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:31.362 01:25:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.362 01:25:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:31.362 01:25:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.362 01:25:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:31.362 { 00:05:31.362 "name": "Malloc0", 00:05:31.362 "aliases": [ 00:05:31.362 "2b4434a1-c0c6-4a22-9f4b-7b2628f47b63" 00:05:31.362 ], 00:05:31.362 "product_name": "Malloc disk", 00:05:31.362 "block_size": 512, 00:05:31.362 "num_blocks": 16384, 00:05:31.362 "uuid": "2b4434a1-c0c6-4a22-9f4b-7b2628f47b63", 00:05:31.362 "assigned_rate_limits": { 00:05:31.362 "rw_ios_per_sec": 0, 00:05:31.362 "rw_mbytes_per_sec": 0, 00:05:31.362 "r_mbytes_per_sec": 0, 00:05:31.362 "w_mbytes_per_sec": 0 00:05:31.362 }, 00:05:31.362 "claimed": true, 00:05:31.362 "claim_type": "exclusive_write", 00:05:31.362 "zoned": false, 00:05:31.362 "supported_io_types": { 00:05:31.362 "read": true, 00:05:31.362 "write": true, 00:05:31.362 "unmap": true, 00:05:31.362 "flush": true, 00:05:31.362 "reset": true, 00:05:31.362 "nvme_admin": false, 00:05:31.362 "nvme_io": false, 00:05:31.362 "nvme_io_md": false, 00:05:31.362 "write_zeroes": true, 00:05:31.362 "zcopy": true, 00:05:31.362 "get_zone_info": false, 00:05:31.362 "zone_management": false, 00:05:31.362 "zone_append": false, 00:05:31.362 "compare": false, 00:05:31.362 "compare_and_write": false, 00:05:31.362 "abort": true, 00:05:31.362 "seek_hole": false, 00:05:31.362 "seek_data": false, 00:05:31.362 "copy": true, 00:05:31.362 "nvme_iov_md": false 00:05:31.362 }, 00:05:31.362 "memory_domains": [ 00:05:31.362 { 00:05:31.362 "dma_device_id": "system", 00:05:31.362 "dma_device_type": 1 00:05:31.362 }, 00:05:31.362 { 00:05:31.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.362 "dma_device_type": 2 00:05:31.362 } 00:05:31.362 ], 00:05:31.362 "driver_specific": {} 00:05:31.362 }, 00:05:31.362 { 00:05:31.362 "name": "Passthru0", 00:05:31.362 "aliases": [ 00:05:31.362 "013814fa-06f3-565b-a75d-e78261028a85" 00:05:31.362 ], 00:05:31.362 "product_name": "passthru", 00:05:31.362 
"block_size": 512, 00:05:31.362 "num_blocks": 16384, 00:05:31.362 "uuid": "013814fa-06f3-565b-a75d-e78261028a85", 00:05:31.362 "assigned_rate_limits": { 00:05:31.362 "rw_ios_per_sec": 0, 00:05:31.362 "rw_mbytes_per_sec": 0, 00:05:31.362 "r_mbytes_per_sec": 0, 00:05:31.362 "w_mbytes_per_sec": 0 00:05:31.362 }, 00:05:31.362 "claimed": false, 00:05:31.362 "zoned": false, 00:05:31.362 "supported_io_types": { 00:05:31.362 "read": true, 00:05:31.362 "write": true, 00:05:31.362 "unmap": true, 00:05:31.362 "flush": true, 00:05:31.362 "reset": true, 00:05:31.362 "nvme_admin": false, 00:05:31.362 "nvme_io": false, 00:05:31.362 "nvme_io_md": false, 00:05:31.362 "write_zeroes": true, 00:05:31.362 "zcopy": true, 00:05:31.362 "get_zone_info": false, 00:05:31.362 "zone_management": false, 00:05:31.362 "zone_append": false, 00:05:31.362 "compare": false, 00:05:31.362 "compare_and_write": false, 00:05:31.362 "abort": true, 00:05:31.362 "seek_hole": false, 00:05:31.362 "seek_data": false, 00:05:31.362 "copy": true, 00:05:31.362 "nvme_iov_md": false 00:05:31.362 }, 00:05:31.362 "memory_domains": [ 00:05:31.362 { 00:05:31.362 "dma_device_id": "system", 00:05:31.362 "dma_device_type": 1 00:05:31.362 }, 00:05:31.362 { 00:05:31.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.362 "dma_device_type": 2 00:05:31.362 } 00:05:31.362 ], 00:05:31.362 "driver_specific": { 00:05:31.362 "passthru": { 00:05:31.362 "name": "Passthru0", 00:05:31.362 "base_bdev_name": "Malloc0" 00:05:31.362 } 00:05:31.362 } 00:05:31.362 } 00:05:31.362 ]' 00:05:31.362 01:25:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:31.362 01:25:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:31.362 01:25:30 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:31.362 01:25:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.362 01:25:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.362 01:25:30 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.362 01:25:30 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:31.362 01:25:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.362 01:25:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.362 01:25:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.362 01:25:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:31.362 01:25:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.362 01:25:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.362 01:25:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.362 01:25:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:31.362 01:25:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:31.622 01:25:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:31.622 00:05:31.622 real 0m0.338s 00:05:31.622 user 0m0.204s 00:05:31.622 sys 0m0.062s 00:05:31.622 01:25:30 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.623 01:25:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.623 ************************************ 00:05:31.623 END TEST rpc_integrity 00:05:31.623 ************************************ 00:05:31.623 01:25:30 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:31.623 01:25:30 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.623 01:25:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.623 01:25:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.623 ************************************ 00:05:31.623 START TEST rpc_plugins 00:05:31.623 ************************************ 00:05:31.623 01:25:30 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:31.623 01:25:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:31.623 01:25:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.623 01:25:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.623 01:25:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.623 01:25:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:31.623 01:25:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:31.623 01:25:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.623 01:25:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.623 01:25:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.623 01:25:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:31.623 { 00:05:31.623 "name": "Malloc1", 00:05:31.623 "aliases": [ 00:05:31.623 "bb76a4f6-5621-40be-bdff-18cbc21830fc" 00:05:31.623 ], 00:05:31.623 "product_name": "Malloc disk", 00:05:31.623 "block_size": 4096, 00:05:31.623 "num_blocks": 256, 00:05:31.623 "uuid": "bb76a4f6-5621-40be-bdff-18cbc21830fc", 00:05:31.623 "assigned_rate_limits": { 00:05:31.623 "rw_ios_per_sec": 0, 00:05:31.623 "rw_mbytes_per_sec": 0, 00:05:31.623 "r_mbytes_per_sec": 0, 00:05:31.623 "w_mbytes_per_sec": 0 00:05:31.623 }, 00:05:31.623 "claimed": false, 00:05:31.623 "zoned": false, 00:05:31.623 "supported_io_types": { 00:05:31.623 "read": true, 00:05:31.623 "write": true, 00:05:31.623 "unmap": true, 00:05:31.623 "flush": true, 00:05:31.623 "reset": true, 00:05:31.623 "nvme_admin": false, 00:05:31.623 "nvme_io": false, 00:05:31.623 "nvme_io_md": false, 00:05:31.623 "write_zeroes": true, 00:05:31.623 "zcopy": true, 00:05:31.623 "get_zone_info": false, 00:05:31.623 "zone_management": false, 00:05:31.623 "zone_append": false, 00:05:31.623 "compare": false, 00:05:31.623 "compare_and_write": false, 00:05:31.623 "abort": true, 00:05:31.623 "seek_hole": false, 00:05:31.623 "seek_data": false, 00:05:31.623 "copy": 
true, 00:05:31.623 "nvme_iov_md": false 00:05:31.623 }, 00:05:31.623 "memory_domains": [ 00:05:31.623 { 00:05:31.623 "dma_device_id": "system", 00:05:31.623 "dma_device_type": 1 00:05:31.623 }, 00:05:31.623 { 00:05:31.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.623 "dma_device_type": 2 00:05:31.623 } 00:05:31.623 ], 00:05:31.623 "driver_specific": {} 00:05:31.623 } 00:05:31.623 ]' 00:05:31.623 01:25:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:31.623 01:25:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:31.623 01:25:30 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:31.623 01:25:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.623 01:25:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.623 01:25:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.623 01:25:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:31.623 01:25:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.623 01:25:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.623 01:25:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.623 01:25:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:31.623 01:25:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:31.623 01:25:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:31.623 00:05:31.623 real 0m0.158s 00:05:31.623 user 0m0.097s 00:05:31.623 sys 0m0.025s 00:05:31.623 01:25:30 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.623 01:25:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:31.623 ************************************ 00:05:31.623 END TEST rpc_plugins 00:05:31.623 ************************************ 00:05:31.891 01:25:30 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:31.891 01:25:30 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.891 01:25:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.891 01:25:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.891 ************************************ 00:05:31.891 START TEST rpc_trace_cmd_test 00:05:31.891 ************************************ 00:05:31.892 01:25:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:31.892 01:25:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:31.892 01:25:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:31.892 01:25:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.892 01:25:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:31.892 01:25:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.892 01:25:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:31.892 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70090", 00:05:31.892 "tpoint_group_mask": "0x8", 00:05:31.892 "iscsi_conn": { 00:05:31.892 "mask": "0x2", 00:05:31.892 "tpoint_mask": "0x0" 00:05:31.892 }, 00:05:31.892 "scsi": { 00:05:31.892 "mask": "0x4", 00:05:31.892 "tpoint_mask": "0x0" 00:05:31.892 }, 00:05:31.892 "bdev": { 00:05:31.892 "mask": "0x8", 00:05:31.892 "tpoint_mask": "0xffffffffffffffff" 00:05:31.892 }, 00:05:31.892 "nvmf_rdma": { 00:05:31.892 "mask": "0x10", 00:05:31.892 "tpoint_mask": "0x0" 00:05:31.892 }, 00:05:31.892 "nvmf_tcp": { 00:05:31.892 "mask": "0x20", 00:05:31.892 "tpoint_mask": "0x0" 00:05:31.892 }, 00:05:31.892 "ftl": { 00:05:31.892 "mask": "0x40", 00:05:31.892 "tpoint_mask": "0x0" 00:05:31.892 }, 00:05:31.892 "blobfs": { 00:05:31.892 "mask": "0x80", 00:05:31.892 "tpoint_mask": "0x0" 00:05:31.892 }, 00:05:31.892 "dsa": { 00:05:31.892 "mask": "0x200", 00:05:31.892 "tpoint_mask": "0x0" 00:05:31.892 }, 00:05:31.892 "thread": { 00:05:31.892 "mask": "0x400", 00:05:31.892 
"tpoint_mask": "0x0" 00:05:31.892 }, 00:05:31.892 "nvme_pcie": { 00:05:31.892 "mask": "0x800", 00:05:31.892 "tpoint_mask": "0x0" 00:05:31.892 }, 00:05:31.892 "iaa": { 00:05:31.892 "mask": "0x1000", 00:05:31.892 "tpoint_mask": "0x0" 00:05:31.892 }, 00:05:31.892 "nvme_tcp": { 00:05:31.892 "mask": "0x2000", 00:05:31.892 "tpoint_mask": "0x0" 00:05:31.892 }, 00:05:31.892 "bdev_nvme": { 00:05:31.892 "mask": "0x4000", 00:05:31.892 "tpoint_mask": "0x0" 00:05:31.892 }, 00:05:31.892 "sock": { 00:05:31.892 "mask": "0x8000", 00:05:31.892 "tpoint_mask": "0x0" 00:05:31.892 }, 00:05:31.892 "blob": { 00:05:31.892 "mask": "0x10000", 00:05:31.892 "tpoint_mask": "0x0" 00:05:31.892 }, 00:05:31.892 "bdev_raid": { 00:05:31.892 "mask": "0x20000", 00:05:31.892 "tpoint_mask": "0x0" 00:05:31.892 }, 00:05:31.892 "scheduler": { 00:05:31.892 "mask": "0x40000", 00:05:31.892 "tpoint_mask": "0x0" 00:05:31.892 } 00:05:31.892 }' 00:05:31.892 01:25:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:31.892 01:25:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:31.892 01:25:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:31.892 01:25:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:31.892 01:25:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:31.892 01:25:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:31.892 01:25:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:31.892 01:25:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:32.165 01:25:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:32.165 01:25:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:32.165 00:05:32.165 real 0m0.232s 00:05:32.165 user 0m0.183s 00:05:32.165 sys 0m0.037s 00:05:32.165 01:25:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:05:32.165 01:25:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:32.165 ************************************ 00:05:32.165 END TEST rpc_trace_cmd_test 00:05:32.165 ************************************ 00:05:32.165 01:25:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:32.165 01:25:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:32.165 01:25:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:32.165 01:25:30 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.165 01:25:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.165 01:25:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.165 ************************************ 00:05:32.165 START TEST rpc_daemon_integrity 00:05:32.165 ************************************ 00:05:32.165 01:25:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:32.165 01:25:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:32.165 01:25:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.165 01:25:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.165 01:25:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.165 01:25:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:32.165 01:25:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:32.165 01:25:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:32.165 01:25:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:32.165 01:25:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.165 01:25:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.165 01:25:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.165 01:25:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:05:32.165 01:25:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:32.165 01:25:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.165 01:25:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.165 01:25:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.165 01:25:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:32.165 { 00:05:32.165 "name": "Malloc2", 00:05:32.165 "aliases": [ 00:05:32.165 "92d7d5b4-213e-41e8-ad14-0382e3c10907" 00:05:32.165 ], 00:05:32.165 "product_name": "Malloc disk", 00:05:32.165 "block_size": 512, 00:05:32.165 "num_blocks": 16384, 00:05:32.165 "uuid": "92d7d5b4-213e-41e8-ad14-0382e3c10907", 00:05:32.165 "assigned_rate_limits": { 00:05:32.165 "rw_ios_per_sec": 0, 00:05:32.165 "rw_mbytes_per_sec": 0, 00:05:32.165 "r_mbytes_per_sec": 0, 00:05:32.165 "w_mbytes_per_sec": 0 00:05:32.165 }, 00:05:32.165 "claimed": false, 00:05:32.165 "zoned": false, 00:05:32.165 "supported_io_types": { 00:05:32.165 "read": true, 00:05:32.165 "write": true, 00:05:32.165 "unmap": true, 00:05:32.165 "flush": true, 00:05:32.165 "reset": true, 00:05:32.165 "nvme_admin": false, 00:05:32.165 "nvme_io": false, 00:05:32.165 "nvme_io_md": false, 00:05:32.165 "write_zeroes": true, 00:05:32.165 "zcopy": true, 00:05:32.165 "get_zone_info": false, 00:05:32.165 "zone_management": false, 00:05:32.165 "zone_append": false, 00:05:32.165 "compare": false, 00:05:32.165 "compare_and_write": false, 00:05:32.165 "abort": true, 00:05:32.165 "seek_hole": false, 00:05:32.165 "seek_data": false, 00:05:32.165 "copy": true, 00:05:32.165 "nvme_iov_md": false 00:05:32.165 }, 00:05:32.165 "memory_domains": [ 00:05:32.165 { 00:05:32.165 "dma_device_id": "system", 00:05:32.165 "dma_device_type": 1 00:05:32.165 }, 00:05:32.165 { 00:05:32.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.165 "dma_device_type": 2 00:05:32.165 } 
00:05:32.165 ], 00:05:32.165 "driver_specific": {} 00:05:32.165 } 00:05:32.165 ]' 00:05:32.165 01:25:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:32.165 01:25:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:32.166 01:25:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:32.166 01:25:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.166 01:25:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.166 [2024-10-09 01:25:31.016639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:32.166 [2024-10-09 01:25:31.016720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:32.166 [2024-10-09 01:25:31.016746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:32.166 [2024-10-09 01:25:31.016758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:32.166 [2024-10-09 01:25:31.019143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:32.166 [2024-10-09 01:25:31.019193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:32.166 Passthru0 00:05:32.166 01:25:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.166 01:25:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:32.166 01:25:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.166 01:25:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.166 01:25:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.166 01:25:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:32.166 { 00:05:32.166 "name": "Malloc2", 00:05:32.166 "aliases": [ 00:05:32.166 "92d7d5b4-213e-41e8-ad14-0382e3c10907" 
00:05:32.166 ], 00:05:32.166 "product_name": "Malloc disk", 00:05:32.166 "block_size": 512, 00:05:32.166 "num_blocks": 16384, 00:05:32.166 "uuid": "92d7d5b4-213e-41e8-ad14-0382e3c10907", 00:05:32.166 "assigned_rate_limits": { 00:05:32.166 "rw_ios_per_sec": 0, 00:05:32.166 "rw_mbytes_per_sec": 0, 00:05:32.166 "r_mbytes_per_sec": 0, 00:05:32.166 "w_mbytes_per_sec": 0 00:05:32.166 }, 00:05:32.166 "claimed": true, 00:05:32.166 "claim_type": "exclusive_write", 00:05:32.166 "zoned": false, 00:05:32.166 "supported_io_types": { 00:05:32.166 "read": true, 00:05:32.166 "write": true, 00:05:32.166 "unmap": true, 00:05:32.166 "flush": true, 00:05:32.166 "reset": true, 00:05:32.166 "nvme_admin": false, 00:05:32.166 "nvme_io": false, 00:05:32.166 "nvme_io_md": false, 00:05:32.166 "write_zeroes": true, 00:05:32.166 "zcopy": true, 00:05:32.166 "get_zone_info": false, 00:05:32.166 "zone_management": false, 00:05:32.166 "zone_append": false, 00:05:32.166 "compare": false, 00:05:32.166 "compare_and_write": false, 00:05:32.166 "abort": true, 00:05:32.166 "seek_hole": false, 00:05:32.166 "seek_data": false, 00:05:32.166 "copy": true, 00:05:32.166 "nvme_iov_md": false 00:05:32.166 }, 00:05:32.166 "memory_domains": [ 00:05:32.166 { 00:05:32.166 "dma_device_id": "system", 00:05:32.166 "dma_device_type": 1 00:05:32.166 }, 00:05:32.166 { 00:05:32.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.166 "dma_device_type": 2 00:05:32.166 } 00:05:32.166 ], 00:05:32.166 "driver_specific": {} 00:05:32.166 }, 00:05:32.166 { 00:05:32.166 "name": "Passthru0", 00:05:32.166 "aliases": [ 00:05:32.166 "60cde32e-a7f6-57bb-8c85-844d5e10cd74" 00:05:32.166 ], 00:05:32.166 "product_name": "passthru", 00:05:32.166 "block_size": 512, 00:05:32.166 "num_blocks": 16384, 00:05:32.166 "uuid": "60cde32e-a7f6-57bb-8c85-844d5e10cd74", 00:05:32.166 "assigned_rate_limits": { 00:05:32.166 "rw_ios_per_sec": 0, 00:05:32.166 "rw_mbytes_per_sec": 0, 00:05:32.166 "r_mbytes_per_sec": 0, 00:05:32.166 "w_mbytes_per_sec": 0 
00:05:32.166 }, 00:05:32.166 "claimed": false, 00:05:32.166 "zoned": false, 00:05:32.166 "supported_io_types": { 00:05:32.166 "read": true, 00:05:32.166 "write": true, 00:05:32.166 "unmap": true, 00:05:32.166 "flush": true, 00:05:32.166 "reset": true, 00:05:32.166 "nvme_admin": false, 00:05:32.166 "nvme_io": false, 00:05:32.166 "nvme_io_md": false, 00:05:32.166 "write_zeroes": true, 00:05:32.166 "zcopy": true, 00:05:32.166 "get_zone_info": false, 00:05:32.166 "zone_management": false, 00:05:32.166 "zone_append": false, 00:05:32.166 "compare": false, 00:05:32.166 "compare_and_write": false, 00:05:32.166 "abort": true, 00:05:32.166 "seek_hole": false, 00:05:32.166 "seek_data": false, 00:05:32.166 "copy": true, 00:05:32.166 "nvme_iov_md": false 00:05:32.166 }, 00:05:32.166 "memory_domains": [ 00:05:32.166 { 00:05:32.166 "dma_device_id": "system", 00:05:32.166 "dma_device_type": 1 00:05:32.166 }, 00:05:32.166 { 00:05:32.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.166 "dma_device_type": 2 00:05:32.166 } 00:05:32.166 ], 00:05:32.166 "driver_specific": { 00:05:32.166 "passthru": { 00:05:32.166 "name": "Passthru0", 00:05:32.166 "base_bdev_name": "Malloc2" 00:05:32.166 } 00:05:32.166 } 00:05:32.166 } 00:05:32.166 ]' 00:05:32.166 01:25:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:32.426 01:25:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:32.426 01:25:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:32.426 01:25:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.426 01:25:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.426 01:25:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.426 01:25:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:32.426 01:25:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:05:32.426 01:25:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.426 01:25:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.426 01:25:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:32.426 01:25:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.426 01:25:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.426 01:25:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.426 01:25:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:32.426 01:25:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:32.426 01:25:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:32.426 00:05:32.426 real 0m0.313s 00:05:32.426 user 0m0.188s 00:05:32.426 sys 0m0.055s 00:05:32.426 01:25:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.426 01:25:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.426 ************************************ 00:05:32.426 END TEST rpc_daemon_integrity 00:05:32.426 ************************************ 00:05:32.426 01:25:31 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:32.426 01:25:31 rpc -- rpc/rpc.sh@84 -- # killprocess 70090 00:05:32.426 01:25:31 rpc -- common/autotest_common.sh@950 -- # '[' -z 70090 ']' 00:05:32.426 01:25:31 rpc -- common/autotest_common.sh@954 -- # kill -0 70090 00:05:32.426 01:25:31 rpc -- common/autotest_common.sh@955 -- # uname 00:05:32.426 01:25:31 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:32.426 01:25:31 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70090 00:05:32.426 01:25:31 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:32.426 01:25:31 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:32.426 
killing process with pid 70090 00:05:32.426 01:25:31 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70090' 00:05:32.426 01:25:31 rpc -- common/autotest_common.sh@969 -- # kill 70090 00:05:32.426 01:25:31 rpc -- common/autotest_common.sh@974 -- # wait 70090 00:05:32.995 00:05:32.995 real 0m2.909s 00:05:32.995 user 0m3.447s 00:05:32.995 sys 0m0.930s 00:05:32.995 01:25:31 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.995 01:25:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.995 ************************************ 00:05:32.995 END TEST rpc 00:05:32.995 ************************************ 00:05:32.995 01:25:31 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:32.995 01:25:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.995 01:25:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.995 01:25:31 -- common/autotest_common.sh@10 -- # set +x 00:05:32.995 ************************************ 00:05:32.995 START TEST skip_rpc 00:05:32.995 ************************************ 00:05:32.995 01:25:31 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:32.995 * Looking for test storage... 
00:05:32.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:32.995 01:25:31 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:32.995 01:25:31 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:32.995 01:25:31 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:33.255 01:25:31 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.255 01:25:31 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:33.255 01:25:31 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.255 01:25:31 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:33.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.255 --rc genhtml_branch_coverage=1 00:05:33.255 --rc genhtml_function_coverage=1 00:05:33.255 --rc genhtml_legend=1 00:05:33.255 --rc geninfo_all_blocks=1 00:05:33.255 --rc geninfo_unexecuted_blocks=1 00:05:33.255 00:05:33.255 ' 00:05:33.255 01:25:31 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:33.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.255 --rc genhtml_branch_coverage=1 00:05:33.255 --rc genhtml_function_coverage=1 00:05:33.255 --rc genhtml_legend=1 00:05:33.255 --rc geninfo_all_blocks=1 00:05:33.255 --rc geninfo_unexecuted_blocks=1 00:05:33.255 00:05:33.256 ' 00:05:33.256 01:25:31 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:05:33.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.256 --rc genhtml_branch_coverage=1 00:05:33.256 --rc genhtml_function_coverage=1 00:05:33.256 --rc genhtml_legend=1 00:05:33.256 --rc geninfo_all_blocks=1 00:05:33.256 --rc geninfo_unexecuted_blocks=1 00:05:33.256 00:05:33.256 ' 00:05:33.256 01:25:31 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:33.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.256 --rc genhtml_branch_coverage=1 00:05:33.256 --rc genhtml_function_coverage=1 00:05:33.256 --rc genhtml_legend=1 00:05:33.256 --rc geninfo_all_blocks=1 00:05:33.256 --rc geninfo_unexecuted_blocks=1 00:05:33.256 00:05:33.256 ' 00:05:33.256 01:25:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:33.256 01:25:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:33.256 01:25:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:33.256 01:25:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.256 01:25:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.256 01:25:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.256 ************************************ 00:05:33.256 START TEST skip_rpc 00:05:33.256 ************************************ 00:05:33.256 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:33.256 01:25:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=70292 00:05:33.256 01:25:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.256 01:25:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:33.256 01:25:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:33.256 [2024-10-09 01:25:32.084404] Starting SPDK v25.01-pre 
git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:05:33.256 [2024-10-09 01:25:32.084537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70292 ] 00:05:33.515 [2024-10-09 01:25:32.215207] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:33.515 [2024-10-09 01:25:32.236588] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.515 [2024-10-09 01:25:32.284341] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.794 01:25:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:38.794 01:25:36 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:38.794 01:25:36 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:38.794 01:25:36 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:38.794 01:25:36 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.794 01:25:36 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:38.794 01:25:36 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:38.794 01:25:36 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:38.794 01:25:36 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.794 01:25:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.794 01:25:36 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:38.794 01:25:36 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:38.794 01:25:37 skip_rpc.skip_rpc -- common/autotest_common.sh@661 
-- # (( es > 128 )) 00:05:38.794 01:25:37 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:38.794 01:25:37 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:38.794 01:25:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:38.794 01:25:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70292 00:05:38.794 01:25:37 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 70292 ']' 00:05:38.794 01:25:37 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 70292 00:05:38.794 01:25:37 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:38.794 01:25:37 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:38.794 01:25:37 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70292 00:05:38.794 01:25:37 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:38.794 01:25:37 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:38.794 01:25:37 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70292' 00:05:38.794 killing process with pid 70292 00:05:38.794 01:25:37 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 70292 00:05:38.794 01:25:37 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 70292 00:05:39.054 00:05:39.054 real 0m5.719s 00:05:39.054 user 0m5.309s 00:05:39.054 sys 0m0.337s 00:05:39.054 01:25:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.054 01:25:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.054 ************************************ 00:05:39.054 END TEST skip_rpc 00:05:39.054 ************************************ 00:05:39.054 01:25:37 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:39.054 01:25:37 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:05:39.054 01:25:37 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.054 01:25:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.054 ************************************ 00:05:39.054 START TEST skip_rpc_with_json 00:05:39.054 ************************************ 00:05:39.054 01:25:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:39.054 01:25:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:39.054 01:25:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=70385 00:05:39.054 01:25:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.054 01:25:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.054 01:25:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 70385 00:05:39.054 01:25:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 70385 ']' 00:05:39.054 01:25:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.054 01:25:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.054 01:25:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.054 01:25:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.054 01:25:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.054 [2024-10-09 01:25:37.869006] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:05:39.054 [2024-10-09 01:25:37.869154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70385 ] 00:05:39.314 [2024-10-09 01:25:38.001154] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:39.314 [2024-10-09 01:25:38.031008] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.314 [2024-10-09 01:25:38.102103] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.883 01:25:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.883 01:25:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:39.883 01:25:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:39.883 01:25:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.883 01:25:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.883 [2024-10-09 01:25:38.732747] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:39.883 request: 00:05:39.883 { 00:05:39.883 "trtype": "tcp", 00:05:39.883 "method": "nvmf_get_transports", 00:05:39.883 "req_id": 1 00:05:39.883 } 00:05:39.883 Got JSON-RPC error response 00:05:39.883 response: 00:05:39.883 { 00:05:39.883 "code": -19, 00:05:39.883 "message": "No such device" 00:05:39.883 } 00:05:39.883 01:25:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:39.883 01:25:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:39.883 01:25:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.883 01:25:38 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.883 [2024-10-09 01:25:38.744865] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:39.883 01:25:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.883 01:25:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:39.883 01:25:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.883 01:25:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.144 01:25:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.144 01:25:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:40.144 { 00:05:40.144 "subsystems": [ 00:05:40.144 { 00:05:40.144 "subsystem": "fsdev", 00:05:40.144 "config": [ 00:05:40.144 { 00:05:40.144 "method": "fsdev_set_opts", 00:05:40.144 "params": { 00:05:40.144 "fsdev_io_pool_size": 65535, 00:05:40.144 "fsdev_io_cache_size": 256 00:05:40.144 } 00:05:40.144 } 00:05:40.144 ] 00:05:40.144 }, 00:05:40.144 { 00:05:40.144 "subsystem": "keyring", 00:05:40.144 "config": [] 00:05:40.144 }, 00:05:40.144 { 00:05:40.144 "subsystem": "iobuf", 00:05:40.144 "config": [ 00:05:40.144 { 00:05:40.144 "method": "iobuf_set_options", 00:05:40.144 "params": { 00:05:40.144 "small_pool_count": 8192, 00:05:40.144 "large_pool_count": 1024, 00:05:40.144 "small_bufsize": 8192, 00:05:40.144 "large_bufsize": 135168 00:05:40.144 } 00:05:40.144 } 00:05:40.144 ] 00:05:40.144 }, 00:05:40.144 { 00:05:40.144 "subsystem": "sock", 00:05:40.144 "config": [ 00:05:40.144 { 00:05:40.144 "method": "sock_set_default_impl", 00:05:40.144 "params": { 00:05:40.144 "impl_name": "posix" 00:05:40.144 } 00:05:40.144 }, 00:05:40.144 { 00:05:40.144 "method": "sock_impl_set_options", 00:05:40.144 "params": { 00:05:40.144 "impl_name": "ssl", 00:05:40.144 "recv_buf_size": 4096, 
00:05:40.144 "send_buf_size": 4096, 00:05:40.144 "enable_recv_pipe": true, 00:05:40.144 "enable_quickack": false, 00:05:40.144 "enable_placement_id": 0, 00:05:40.144 "enable_zerocopy_send_server": true, 00:05:40.144 "enable_zerocopy_send_client": false, 00:05:40.144 "zerocopy_threshold": 0, 00:05:40.144 "tls_version": 0, 00:05:40.144 "enable_ktls": false 00:05:40.144 } 00:05:40.144 }, 00:05:40.144 { 00:05:40.144 "method": "sock_impl_set_options", 00:05:40.144 "params": { 00:05:40.144 "impl_name": "posix", 00:05:40.144 "recv_buf_size": 2097152, 00:05:40.144 "send_buf_size": 2097152, 00:05:40.144 "enable_recv_pipe": true, 00:05:40.144 "enable_quickack": false, 00:05:40.144 "enable_placement_id": 0, 00:05:40.144 "enable_zerocopy_send_server": true, 00:05:40.144 "enable_zerocopy_send_client": false, 00:05:40.144 "zerocopy_threshold": 0, 00:05:40.144 "tls_version": 0, 00:05:40.144 "enable_ktls": false 00:05:40.144 } 00:05:40.144 } 00:05:40.144 ] 00:05:40.144 }, 00:05:40.144 { 00:05:40.144 "subsystem": "vmd", 00:05:40.144 "config": [] 00:05:40.144 }, 00:05:40.144 { 00:05:40.144 "subsystem": "accel", 00:05:40.144 "config": [ 00:05:40.144 { 00:05:40.144 "method": "accel_set_options", 00:05:40.144 "params": { 00:05:40.144 "small_cache_size": 128, 00:05:40.144 "large_cache_size": 16, 00:05:40.144 "task_count": 2048, 00:05:40.144 "sequence_count": 2048, 00:05:40.144 "buf_count": 2048 00:05:40.144 } 00:05:40.144 } 00:05:40.144 ] 00:05:40.144 }, 00:05:40.144 { 00:05:40.144 "subsystem": "bdev", 00:05:40.144 "config": [ 00:05:40.144 { 00:05:40.144 "method": "bdev_set_options", 00:05:40.144 "params": { 00:05:40.144 "bdev_io_pool_size": 65535, 00:05:40.144 "bdev_io_cache_size": 256, 00:05:40.144 "bdev_auto_examine": true, 00:05:40.144 "iobuf_small_cache_size": 128, 00:05:40.144 "iobuf_large_cache_size": 16 00:05:40.144 } 00:05:40.144 }, 00:05:40.144 { 00:05:40.144 "method": "bdev_raid_set_options", 00:05:40.144 "params": { 00:05:40.144 "process_window_size_kb": 1024, 00:05:40.144 
"process_max_bandwidth_mb_sec": 0 00:05:40.144 } 00:05:40.144 }, 00:05:40.144 { 00:05:40.144 "method": "bdev_iscsi_set_options", 00:05:40.144 "params": { 00:05:40.144 "timeout_sec": 30 00:05:40.144 } 00:05:40.144 }, 00:05:40.144 { 00:05:40.144 "method": "bdev_nvme_set_options", 00:05:40.144 "params": { 00:05:40.144 "action_on_timeout": "none", 00:05:40.144 "timeout_us": 0, 00:05:40.144 "timeout_admin_us": 0, 00:05:40.144 "keep_alive_timeout_ms": 10000, 00:05:40.144 "arbitration_burst": 0, 00:05:40.144 "low_priority_weight": 0, 00:05:40.144 "medium_priority_weight": 0, 00:05:40.144 "high_priority_weight": 0, 00:05:40.144 "nvme_adminq_poll_period_us": 10000, 00:05:40.144 "nvme_ioq_poll_period_us": 0, 00:05:40.144 "io_queue_requests": 0, 00:05:40.144 "delay_cmd_submit": true, 00:05:40.144 "transport_retry_count": 4, 00:05:40.144 "bdev_retry_count": 3, 00:05:40.144 "transport_ack_timeout": 0, 00:05:40.144 "ctrlr_loss_timeout_sec": 0, 00:05:40.144 "reconnect_delay_sec": 0, 00:05:40.144 "fast_io_fail_timeout_sec": 0, 00:05:40.144 "disable_auto_failback": false, 00:05:40.144 "generate_uuids": false, 00:05:40.144 "transport_tos": 0, 00:05:40.144 "nvme_error_stat": false, 00:05:40.144 "rdma_srq_size": 0, 00:05:40.144 "io_path_stat": false, 00:05:40.144 "allow_accel_sequence": false, 00:05:40.144 "rdma_max_cq_size": 0, 00:05:40.144 "rdma_cm_event_timeout_ms": 0, 00:05:40.144 "dhchap_digests": [ 00:05:40.144 "sha256", 00:05:40.144 "sha384", 00:05:40.144 "sha512" 00:05:40.144 ], 00:05:40.144 "dhchap_dhgroups": [ 00:05:40.144 "null", 00:05:40.144 "ffdhe2048", 00:05:40.144 "ffdhe3072", 00:05:40.144 "ffdhe4096", 00:05:40.144 "ffdhe6144", 00:05:40.144 "ffdhe8192" 00:05:40.144 ] 00:05:40.144 } 00:05:40.144 }, 00:05:40.144 { 00:05:40.144 "method": "bdev_nvme_set_hotplug", 00:05:40.144 "params": { 00:05:40.144 "period_us": 100000, 00:05:40.144 "enable": false 00:05:40.144 } 00:05:40.144 }, 00:05:40.144 { 00:05:40.144 "method": "bdev_wait_for_examine" 00:05:40.144 } 00:05:40.144 ] 
00:05:40.144 }, 00:05:40.144 { 00:05:40.144 "subsystem": "scsi", 00:05:40.144 "config": null 00:05:40.144 }, 00:05:40.144 { 00:05:40.144 "subsystem": "scheduler", 00:05:40.144 "config": [ 00:05:40.144 { 00:05:40.144 "method": "framework_set_scheduler", 00:05:40.144 "params": { 00:05:40.144 "name": "static" 00:05:40.144 } 00:05:40.144 } 00:05:40.144 ] 00:05:40.144 }, 00:05:40.144 { 00:05:40.144 "subsystem": "vhost_scsi", 00:05:40.144 "config": [] 00:05:40.144 }, 00:05:40.144 { 00:05:40.144 "subsystem": "vhost_blk", 00:05:40.144 "config": [] 00:05:40.144 }, 00:05:40.144 { 00:05:40.144 "subsystem": "ublk", 00:05:40.144 "config": [] 00:05:40.144 }, 00:05:40.144 { 00:05:40.144 "subsystem": "nbd", 00:05:40.144 "config": [] 00:05:40.144 }, 00:05:40.144 { 00:05:40.144 "subsystem": "nvmf", 00:05:40.144 "config": [ 00:05:40.144 { 00:05:40.144 "method": "nvmf_set_config", 00:05:40.144 "params": { 00:05:40.144 "discovery_filter": "match_any", 00:05:40.144 "admin_cmd_passthru": { 00:05:40.144 "identify_ctrlr": false 00:05:40.144 }, 00:05:40.144 "dhchap_digests": [ 00:05:40.144 "sha256", 00:05:40.144 "sha384", 00:05:40.144 "sha512" 00:05:40.144 ], 00:05:40.144 "dhchap_dhgroups": [ 00:05:40.144 "null", 00:05:40.144 "ffdhe2048", 00:05:40.144 "ffdhe3072", 00:05:40.144 "ffdhe4096", 00:05:40.144 "ffdhe6144", 00:05:40.144 "ffdhe8192" 00:05:40.144 ] 00:05:40.144 } 00:05:40.144 }, 00:05:40.144 { 00:05:40.144 "method": "nvmf_set_max_subsystems", 00:05:40.144 "params": { 00:05:40.144 "max_subsystems": 1024 00:05:40.144 } 00:05:40.144 }, 00:05:40.144 { 00:05:40.144 "method": "nvmf_set_crdt", 00:05:40.144 "params": { 00:05:40.144 "crdt1": 0, 00:05:40.144 "crdt2": 0, 00:05:40.144 "crdt3": 0 00:05:40.144 } 00:05:40.144 }, 00:05:40.144 { 00:05:40.144 "method": "nvmf_create_transport", 00:05:40.144 "params": { 00:05:40.144 "trtype": "TCP", 00:05:40.144 "max_queue_depth": 128, 00:05:40.144 "max_io_qpairs_per_ctrlr": 127, 00:05:40.144 "in_capsule_data_size": 4096, 00:05:40.144 "max_io_size": 
131072, 00:05:40.144 "io_unit_size": 131072, 00:05:40.144 "max_aq_depth": 128, 00:05:40.144 "num_shared_buffers": 511, 00:05:40.144 "buf_cache_size": 4294967295, 00:05:40.144 "dif_insert_or_strip": false, 00:05:40.144 "zcopy": false, 00:05:40.144 "c2h_success": true, 00:05:40.144 "sock_priority": 0, 00:05:40.144 "abort_timeout_sec": 1, 00:05:40.144 "ack_timeout": 0, 00:05:40.144 "data_wr_pool_size": 0 00:05:40.144 } 00:05:40.144 } 00:05:40.144 ] 00:05:40.144 }, 00:05:40.144 { 00:05:40.144 "subsystem": "iscsi", 00:05:40.144 "config": [ 00:05:40.144 { 00:05:40.144 "method": "iscsi_set_options", 00:05:40.144 "params": { 00:05:40.144 "node_base": "iqn.2016-06.io.spdk", 00:05:40.144 "max_sessions": 128, 00:05:40.144 "max_connections_per_session": 2, 00:05:40.144 "max_queue_depth": 64, 00:05:40.144 "default_time2wait": 2, 00:05:40.144 "default_time2retain": 20, 00:05:40.144 "first_burst_length": 8192, 00:05:40.144 "immediate_data": true, 00:05:40.144 "allow_duplicated_isid": false, 00:05:40.144 "error_recovery_level": 0, 00:05:40.145 "nop_timeout": 60, 00:05:40.145 "nop_in_interval": 30, 00:05:40.145 "disable_chap": false, 00:05:40.145 "require_chap": false, 00:05:40.145 "mutual_chap": false, 00:05:40.145 "chap_group": 0, 00:05:40.145 "max_large_datain_per_connection": 64, 00:05:40.145 "max_r2t_per_connection": 4, 00:05:40.145 "pdu_pool_size": 36864, 00:05:40.145 "immediate_data_pool_size": 16384, 00:05:40.145 "data_out_pool_size": 2048 00:05:40.145 } 00:05:40.145 } 00:05:40.145 ] 00:05:40.145 } 00:05:40.145 ] 00:05:40.145 } 00:05:40.145 01:25:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:40.145 01:25:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 70385 00:05:40.145 01:25:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70385 ']' 00:05:40.145 01:25:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70385 00:05:40.145 01:25:38 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:40.145 01:25:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:40.145 01:25:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70385 00:05:40.145 01:25:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:40.145 01:25:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:40.145 killing process with pid 70385 00:05:40.145 01:25:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70385' 00:05:40.145 01:25:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70385 00:05:40.145 01:25:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70385 00:05:41.084 01:25:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:41.084 01:25:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=70413 00:05:41.084 01:25:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:46.389 01:25:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 70413 00:05:46.389 01:25:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70413 ']' 00:05:46.389 01:25:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70413 00:05:46.389 01:25:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:46.389 01:25:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.389 01:25:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70413 00:05:46.389 01:25:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:05:46.389 01:25:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.389 01:25:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70413' 00:05:46.389 killing process with pid 70413 00:05:46.389 01:25:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70413 00:05:46.389 01:25:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70413 00:05:46.650 01:25:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:46.650 01:25:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:46.650 00:05:46.650 real 0m7.564s 00:05:46.650 user 0m6.795s 00:05:46.650 sys 0m1.090s 00:05:46.650 01:25:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.650 01:25:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.650 ************************************ 00:05:46.650 END TEST skip_rpc_with_json 00:05:46.650 ************************************ 00:05:46.650 01:25:45 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:46.650 01:25:45 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.650 01:25:45 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.650 01:25:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.650 ************************************ 00:05:46.650 START TEST skip_rpc_with_delay 00:05:46.650 ************************************ 00:05:46.650 01:25:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:46.650 01:25:45 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 
00:05:46.650 01:25:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:46.650 01:25:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:46.650 01:25:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.650 01:25:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.650 01:25:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.650 01:25:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.650 01:25:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.650 01:25:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.650 01:25:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.650 01:25:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:46.650 01:25:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:46.650 [2024-10-09 01:25:45.514807] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:46.650 [2024-10-09 01:25:45.514952] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:46.911 01:25:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:46.911 01:25:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:46.911 01:25:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:46.911 01:25:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:46.911 00:05:46.911 real 0m0.172s 00:05:46.911 user 0m0.089s 00:05:46.911 sys 0m0.082s 00:05:46.911 01:25:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.911 01:25:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:46.911 ************************************ 00:05:46.911 END TEST skip_rpc_with_delay 00:05:46.911 ************************************ 00:05:46.911 01:25:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:46.911 01:25:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:46.911 01:25:45 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:46.911 01:25:45 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.911 01:25:45 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.911 01:25:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.911 ************************************ 00:05:46.911 START TEST exit_on_failed_rpc_init 00:05:46.911 ************************************ 00:05:46.911 01:25:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:46.911 01:25:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=70525 00:05:46.911 01:25:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
00:05:46.911 01:25:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 70525 00:05:46.911 01:25:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 70525 ']' 00:05:46.911 01:25:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.911 01:25:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.911 01:25:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.911 01:25:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.911 01:25:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:46.911 [2024-10-09 01:25:45.756367] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:05:46.911 [2024-10-09 01:25:45.756487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70525 ] 00:05:47.181 [2024-10-09 01:25:45.888564] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:47.181 [2024-10-09 01:25:45.916399] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.181 [2024-10-09 01:25:45.985611] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.752 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.752 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:47.752 01:25:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.752 01:25:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:47.752 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:47.752 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:47.753 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:47.753 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.753 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:47.753 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.753 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:47.753 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.753 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:47.753 01:25:46 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:47.753 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:48.012 [2024-10-09 01:25:46.662418] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:05:48.012 [2024-10-09 01:25:46.662563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70543 ] 00:05:48.012 [2024-10-09 01:25:46.795207] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:48.012 [2024-10-09 01:25:46.824226] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.012 [2024-10-09 01:25:46.869157] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.012 [2024-10-09 01:25:46.869240] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:48.012 [2024-10-09 01:25:46.869255] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:48.012 [2024-10-09 01:25:46.869267] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:48.271 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:48.271 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:48.271 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:48.271 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:48.271 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:48.271 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:48.271 01:25:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:48.271 01:25:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 70525 00:05:48.271 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 70525 ']' 00:05:48.271 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 70525 00:05:48.271 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:48.271 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:48.271 01:25:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70525 00:05:48.271 01:25:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:48.271 01:25:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:48.271 killing process with pid 70525 00:05:48.271 01:25:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 70525' 00:05:48.271 01:25:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 70525 00:05:48.272 01:25:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 70525 00:05:48.841 00:05:48.841 real 0m2.025s 00:05:48.841 user 0m1.975s 00:05:48.841 sys 0m0.691s 00:05:48.841 01:25:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.841 01:25:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:48.841 ************************************ 00:05:48.841 END TEST exit_on_failed_rpc_init 00:05:48.841 ************************************ 00:05:49.102 01:25:47 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:49.102 00:05:49.102 real 0m15.999s 00:05:49.102 user 0m14.369s 00:05:49.102 sys 0m2.534s 00:05:49.102 01:25:47 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.102 01:25:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.102 ************************************ 00:05:49.102 END TEST skip_rpc 00:05:49.102 ************************************ 00:05:49.102 01:25:47 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:49.102 01:25:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.102 01:25:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.102 01:25:47 -- common/autotest_common.sh@10 -- # set +x 00:05:49.102 ************************************ 00:05:49.102 START TEST rpc_client 00:05:49.102 ************************************ 00:05:49.102 01:25:47 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:49.102 * Looking for test storage... 
00:05:49.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:49.102 01:25:47 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:49.102 01:25:47 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:49.102 01:25:47 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:49.363 01:25:48 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:49.363 01:25:48 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.363 01:25:48 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.363 01:25:48 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.363 01:25:48 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.363 01:25:48 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.363 01:25:48 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.363 01:25:48 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.364 01:25:48 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.364 01:25:48 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.364 01:25:48 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.364 01:25:48 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.364 01:25:48 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:49.364 01:25:48 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:49.364 01:25:48 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.364 01:25:48 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.364 01:25:48 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:49.364 01:25:48 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:49.364 01:25:48 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.364 01:25:48 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:49.364 01:25:48 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.364 01:25:48 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:49.364 01:25:48 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:49.364 01:25:48 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.364 01:25:48 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:49.364 01:25:48 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.364 01:25:48 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.364 01:25:48 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.364 01:25:48 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:49.364 01:25:48 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.364 01:25:48 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:49.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.364 --rc genhtml_branch_coverage=1 00:05:49.364 --rc genhtml_function_coverage=1 00:05:49.364 --rc genhtml_legend=1 00:05:49.364 --rc geninfo_all_blocks=1 00:05:49.364 --rc geninfo_unexecuted_blocks=1 00:05:49.364 00:05:49.364 ' 00:05:49.364 01:25:48 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:49.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.364 --rc genhtml_branch_coverage=1 00:05:49.364 --rc genhtml_function_coverage=1 00:05:49.364 --rc genhtml_legend=1 00:05:49.364 --rc geninfo_all_blocks=1 00:05:49.364 --rc geninfo_unexecuted_blocks=1 00:05:49.364 00:05:49.364 ' 00:05:49.364 01:25:48 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:49.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.364 --rc genhtml_branch_coverage=1 00:05:49.364 --rc genhtml_function_coverage=1 00:05:49.364 --rc genhtml_legend=1 00:05:49.364 --rc geninfo_all_blocks=1 00:05:49.364 --rc geninfo_unexecuted_blocks=1 00:05:49.364 00:05:49.364 ' 00:05:49.364 01:25:48 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:49.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.364 --rc genhtml_branch_coverage=1 00:05:49.364 --rc genhtml_function_coverage=1 00:05:49.364 --rc genhtml_legend=1 00:05:49.364 --rc geninfo_all_blocks=1 00:05:49.364 --rc geninfo_unexecuted_blocks=1 00:05:49.364 00:05:49.364 ' 00:05:49.364 01:25:48 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:49.364 OK 00:05:49.364 01:25:48 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:49.364 00:05:49.364 real 0m0.300s 00:05:49.364 user 0m0.172s 00:05:49.364 sys 0m0.145s 00:05:49.364 01:25:48 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.364 01:25:48 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:49.364 ************************************ 00:05:49.364 END TEST rpc_client 00:05:49.364 ************************************ 00:05:49.364 01:25:48 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:49.364 01:25:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.364 01:25:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.364 01:25:48 -- common/autotest_common.sh@10 -- # set +x 00:05:49.364 ************************************ 00:05:49.364 START TEST json_config 00:05:49.364 ************************************ 00:05:49.364 01:25:48 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:49.625 01:25:48 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:49.625 01:25:48 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:49.625 01:25:48 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:49.625 01:25:48 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:49.625 01:25:48 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.625 01:25:48 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.625 01:25:48 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.625 01:25:48 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.625 01:25:48 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.625 01:25:48 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.625 01:25:48 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.625 01:25:48 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.625 01:25:48 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.625 01:25:48 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.625 01:25:48 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.625 01:25:48 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:49.625 01:25:48 json_config -- scripts/common.sh@345 -- # : 1 00:05:49.625 01:25:48 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.625 01:25:48 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.625 01:25:48 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:49.625 01:25:48 json_config -- scripts/common.sh@353 -- # local d=1 00:05:49.625 01:25:48 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.625 01:25:48 json_config -- scripts/common.sh@355 -- # echo 1 00:05:49.625 01:25:48 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.625 01:25:48 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:49.625 01:25:48 json_config -- scripts/common.sh@353 -- # local d=2 00:05:49.625 01:25:48 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.625 01:25:48 json_config -- scripts/common.sh@355 -- # echo 2 00:05:49.625 01:25:48 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.625 01:25:48 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.625 01:25:48 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.625 01:25:48 json_config -- scripts/common.sh@368 -- # return 0 00:05:49.625 01:25:48 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.625 01:25:48 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:49.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.625 --rc genhtml_branch_coverage=1 00:05:49.625 --rc genhtml_function_coverage=1 00:05:49.625 --rc genhtml_legend=1 00:05:49.625 --rc geninfo_all_blocks=1 00:05:49.625 --rc geninfo_unexecuted_blocks=1 00:05:49.625 00:05:49.625 ' 00:05:49.625 01:25:48 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:49.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.625 --rc genhtml_branch_coverage=1 00:05:49.625 --rc genhtml_function_coverage=1 00:05:49.625 --rc genhtml_legend=1 00:05:49.625 --rc geninfo_all_blocks=1 00:05:49.625 --rc geninfo_unexecuted_blocks=1 00:05:49.625 00:05:49.625 ' 00:05:49.625 01:25:48 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:49.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.625 --rc genhtml_branch_coverage=1 00:05:49.625 --rc genhtml_function_coverage=1 00:05:49.625 --rc genhtml_legend=1 00:05:49.625 --rc geninfo_all_blocks=1 00:05:49.625 --rc geninfo_unexecuted_blocks=1 00:05:49.625 00:05:49.626 ' 00:05:49.626 01:25:48 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:49.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.626 --rc genhtml_branch_coverage=1 00:05:49.626 --rc genhtml_function_coverage=1 00:05:49.626 --rc genhtml_legend=1 00:05:49.626 --rc geninfo_all_blocks=1 00:05:49.626 --rc geninfo_unexecuted_blocks=1 00:05:49.626 00:05:49.626 ' 00:05:49.626 01:25:48 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cf31f245-87c1-4e89-8524-8663cf6c0ef5 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=cf31f245-87c1-4e89-8524-8663cf6c0ef5 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:49.626 01:25:48 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:49.626 01:25:48 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:49.626 01:25:48 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:49.626 01:25:48 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:49.626 01:25:48 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.626 01:25:48 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.626 01:25:48 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.626 01:25:48 json_config -- paths/export.sh@5 -- # export PATH 00:05:49.626 01:25:48 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@51 -- # : 0 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:49.626 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:49.626 01:25:48 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:49.626 01:25:48 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:49.626 01:25:48 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:49.626 01:25:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:49.626 01:25:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:49.626 01:25:48 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:49.626 WARNING: No tests are enabled so not running JSON configuration tests 00:05:49.626 01:25:48 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:49.626 01:25:48 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:49.626 00:05:49.626 real 0m0.223s 00:05:49.626 user 0m0.145s 00:05:49.626 sys 0m0.086s 00:05:49.626 01:25:48 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.626 01:25:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.626 ************************************ 00:05:49.626 END TEST json_config 00:05:49.626 ************************************ 00:05:49.626 01:25:48 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:49.626 01:25:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.626 01:25:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.626 01:25:48 -- common/autotest_common.sh@10 -- # set +x 00:05:49.626 ************************************ 00:05:49.626 START TEST json_config_extra_key 00:05:49.626 ************************************ 00:05:49.626 01:25:48 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:49.888 01:25:48 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:49.888 01:25:48 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # lcov --version 00:05:49.888 01:25:48 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:49.888 01:25:48 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:49.888 01:25:48 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.888 01:25:48 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:49.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.888 --rc genhtml_branch_coverage=1 00:05:49.888 --rc genhtml_function_coverage=1 00:05:49.888 --rc genhtml_legend=1 00:05:49.888 --rc geninfo_all_blocks=1 00:05:49.888 --rc geninfo_unexecuted_blocks=1 00:05:49.888 00:05:49.888 ' 00:05:49.888 01:25:48 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:49.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.888 --rc genhtml_branch_coverage=1 00:05:49.888 --rc genhtml_function_coverage=1 00:05:49.888 --rc 
genhtml_legend=1 00:05:49.888 --rc geninfo_all_blocks=1 00:05:49.888 --rc geninfo_unexecuted_blocks=1 00:05:49.888 00:05:49.888 ' 00:05:49.888 01:25:48 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:49.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.888 --rc genhtml_branch_coverage=1 00:05:49.888 --rc genhtml_function_coverage=1 00:05:49.888 --rc genhtml_legend=1 00:05:49.888 --rc geninfo_all_blocks=1 00:05:49.888 --rc geninfo_unexecuted_blocks=1 00:05:49.888 00:05:49.888 ' 00:05:49.888 01:25:48 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:49.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.888 --rc genhtml_branch_coverage=1 00:05:49.888 --rc genhtml_function_coverage=1 00:05:49.888 --rc genhtml_legend=1 00:05:49.888 --rc geninfo_all_blocks=1 00:05:49.888 --rc geninfo_unexecuted_blocks=1 00:05:49.888 00:05:49.888 ' 00:05:49.888 01:25:48 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:49.888 01:25:48 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:49.888 01:25:48 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:49.888 01:25:48 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:49.888 01:25:48 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:49.888 01:25:48 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:49.888 01:25:48 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:49.888 01:25:48 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:49.888 01:25:48 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:49.888 01:25:48 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:49.888 01:25:48 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:49.888 01:25:48 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:49.888 01:25:48 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cf31f245-87c1-4e89-8524-8663cf6c0ef5 00:05:49.888 01:25:48 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=cf31f245-87c1-4e89-8524-8663cf6c0ef5 00:05:49.888 01:25:48 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:49.888 01:25:48 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:49.888 01:25:48 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:49.888 01:25:48 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:49.888 01:25:48 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:49.888 01:25:48 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:49.888 01:25:48 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.888 01:25:48 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.889 01:25:48 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.889 01:25:48 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:49.889 01:25:48 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.889 01:25:48 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:49.889 01:25:48 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:49.889 01:25:48 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:49.889 01:25:48 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:49.889 01:25:48 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:49.889 01:25:48 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
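The `build_nvmf_app_args` trace above assembles the target's argument list by appending to a bash array, adding some flags unconditionally (`-i "$NVMF_APP_SHM_ID" -e 0xFFFF`) and others only when set (`"${NO_HUGE[@]}"`, which expands to nothing when empty). A minimal standalone sketch of that pattern, using illustrative variable names rather than the real ones from `nvmf/common.sh`:

```shell
#!/usr/bin/env bash
# Sketch of conditionally assembling an argument array, as build_nvmf_app_args
# does in the trace. APP, APP_SHM_ID, and EXTRA_OPTS are hypothetical stand-ins.
APP=(my_app)               # base command (illustrative)
APP_SHM_ID=0
EXTRA_OPTS=()              # may be empty, like the NO_HUGE case above

APP+=(-i "$APP_SHM_ID" -e 0xFFFF)   # always-on flags
APP+=("${EXTRA_OPTS[@]}")           # an empty array contributes zero words

# Print the final argv one element per line for inspection.
printf '%s\n' "${APP[@]}"
```

The quoted `"${arr[@]}"` expansion is what makes the optional-flags case safe: an empty array disappears entirely instead of injecting an empty-string argument.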
00:05:49.889 01:25:48 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:49.889 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:49.889 01:25:48 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:49.889 01:25:48 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:49.889 01:25:48 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:49.889 01:25:48 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:49.889 01:25:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:49.889 01:25:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:49.889 01:25:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:49.889 01:25:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:49.889 01:25:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:49.889 01:25:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:49.889 01:25:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:49.889 01:25:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:49.889 01:25:48 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:49.889 INFO: launching applications... 00:05:49.889 01:25:48 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:05:49.889 01:25:48 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:49.889 01:25:48 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:49.889 01:25:48 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:49.889 01:25:48 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:49.889 01:25:48 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:49.889 01:25:48 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:49.889 01:25:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:49.889 01:25:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:49.889 01:25:48 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70731 00:05:49.889 Waiting for target to run... 00:05:49.889 01:25:48 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:49.889 01:25:48 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70731 /var/tmp/spdk_tgt.sock 00:05:49.889 01:25:48 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 70731 ']' 00:05:49.889 01:25:48 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:49.889 01:25:48 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:49.889 01:25:48 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:05:49.889 01:25:48 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:49.889 01:25:48 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.889 01:25:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:49.889 [2024-10-09 01:25:48.758076] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:05:49.889 [2024-10-09 01:25:48.758195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70731 ] 00:05:50.459 [2024-10-09 01:25:49.098891] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:50.459 [2024-10-09 01:25:49.129257] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.459 [2024-10-09 01:25:49.171133] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.719 01:25:49 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.719 01:25:49 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:50.719 00:05:50.719 01:25:49 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:50.719 INFO: shutting down applications... 00:05:50.719 01:25:49 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
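The `waitforlisten` step above blocks until the freshly launched `spdk_tgt` is both still alive and listening on its UNIX domain socket, retrying up to `max_retries` times. A simplified sketch of that polling loop (the function name `wait_for_socket` is illustrative; the real helper lives in `common/autotest_common.sh`):

```shell
#!/usr/bin/env bash
# Simplified waitforlisten-style poll: succeed once the PID is alive AND its
# UNIX-domain socket exists; give up after max_retries attempts.
wait_for_socket() {
    local pid=$1 sock=$2 max_retries=${3:-100} i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died: hard failure
        [[ -S $sock ]] && return 0               # socket showed up: success
        sleep 0.1
    done
    return 1                                     # timed out
}

# Demo: bind a throwaway UNIX socket in the background, then wait for it.
# Using $$ (this shell) as the "alive" PID keeps the demo deterministic.
sock=$(mktemp -u)
python3 - "$sock" <<'EOF' &
import socket, sys, time
s = socket.socket(socket.AF_UNIX)
s.bind(sys.argv[1])
time.sleep(2)
EOF
wait_for_socket $$ "$sock"
status=$?
rm -f "$sock"
```

The `kill -0` probe sends no signal; it only checks that the PID still exists, which is why the loop can distinguish "still starting up" from "crashed before listening".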
00:05:50.719 01:25:49 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:50.719 01:25:49 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:50.719 01:25:49 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:50.719 01:25:49 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70731 ]] 00:05:50.719 01:25:49 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70731 00:05:50.719 01:25:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:50.719 01:25:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.719 01:25:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70731 00:05:50.719 01:25:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:51.288 01:25:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:51.288 01:25:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.288 01:25:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70731 00:05:51.288 01:25:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:51.860 01:25:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:51.860 01:25:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.860 01:25:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70731 00:05:51.860 01:25:50 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:51.860 01:25:50 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:51.860 01:25:50 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:51.860 SPDK target shutdown done 00:05:51.860 01:25:50 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:51.860 Success 00:05:51.860 01:25:50 json_config_extra_key -- 
json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:51.860 00:05:51.860 real 0m2.108s 00:05:51.860 user 0m1.592s 00:05:51.860 sys 0m0.473s 00:05:51.860 01:25:50 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.860 01:25:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:51.860 ************************************ 00:05:51.860 END TEST json_config_extra_key 00:05:51.860 ************************************ 00:05:51.860 01:25:50 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:51.860 01:25:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.860 01:25:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.860 01:25:50 -- common/autotest_common.sh@10 -- # set +x 00:05:51.860 ************************************ 00:05:51.860 START TEST alias_rpc 00:05:51.860 ************************************ 00:05:51.860 01:25:50 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:52.121 * Looking for test storage... 
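The shutdown sequence traced above signals the target, then polls `kill -0` up to 30 times with 0.5-second sleeps before reporting "SPDK target shutdown done". A self-contained sketch of that pattern (`shutdown_app` is an illustrative name; the SPDK test inlines this logic and sends SIGINT, while the demo below uses SIGTERM because non-interactive shells start background jobs with SIGINT ignored):

```shell
#!/usr/bin/env bash
# Sketch of the signal-then-poll shutdown from json_config/common.sh's trace:
# send a signal, then probe with `kill -0` up to 30 times, 0.5 s apart.
shutdown_app() {
    local pid=$1 sig=${2:-SIGINT} i
    kill -s "$sig" "$pid" 2>/dev/null
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || return 0   # process gone: clean exit
        sleep 0.5
    done
    return 1                                     # still alive after ~15 s
}

sleep 60 &                  # stand-in for the spdk_tgt process
target_pid=$!
shutdown_app "$target_pid" SIGTERM
status=$?
```

The bounded retry loop is what lets the test fail fast with a trap-driven error instead of hanging forever on a target that ignores the shutdown signal.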
00:05:52.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:52.121 01:25:50 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:52.121 01:25:50 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:52.121 01:25:50 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:52.121 01:25:50 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.121 01:25:50 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:52.121 01:25:50 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.121 01:25:50 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:52.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.121 --rc genhtml_branch_coverage=1 00:05:52.121 --rc genhtml_function_coverage=1 00:05:52.121 --rc genhtml_legend=1 00:05:52.121 --rc geninfo_all_blocks=1 00:05:52.121 --rc geninfo_unexecuted_blocks=1 00:05:52.121 00:05:52.121 ' 00:05:52.121 01:25:50 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:52.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.121 --rc genhtml_branch_coverage=1 00:05:52.121 --rc genhtml_function_coverage=1 00:05:52.121 --rc genhtml_legend=1 00:05:52.121 --rc geninfo_all_blocks=1 00:05:52.121 --rc geninfo_unexecuted_blocks=1 00:05:52.121 00:05:52.121 ' 00:05:52.121 01:25:50 alias_rpc -- common/autotest_common.sh@1695 -- 
# export 'LCOV=lcov 00:05:52.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.121 --rc genhtml_branch_coverage=1 00:05:52.121 --rc genhtml_function_coverage=1 00:05:52.121 --rc genhtml_legend=1 00:05:52.121 --rc geninfo_all_blocks=1 00:05:52.121 --rc geninfo_unexecuted_blocks=1 00:05:52.121 00:05:52.121 ' 00:05:52.121 01:25:50 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:52.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.121 --rc genhtml_branch_coverage=1 00:05:52.121 --rc genhtml_function_coverage=1 00:05:52.121 --rc genhtml_legend=1 00:05:52.121 --rc geninfo_all_blocks=1 00:05:52.121 --rc geninfo_unexecuted_blocks=1 00:05:52.121 00:05:52.121 ' 00:05:52.121 01:25:50 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:52.121 01:25:50 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70811 00:05:52.121 01:25:50 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:52.121 01:25:50 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70811 00:05:52.121 01:25:50 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 70811 ']' 00:05:52.121 01:25:50 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.121 01:25:50 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.121 01:25:50 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.121 01:25:50 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.121 01:25:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.121 [2024-10-09 01:25:50.947262] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:05:52.121 [2024-10-09 01:25:50.947371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70811 ] 00:05:52.380 [2024-10-09 01:25:51.078734] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:52.380 [2024-10-09 01:25:51.108571] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.380 [2024-10-09 01:25:51.179764] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.948 01:25:51 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.948 01:25:51 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:52.948 01:25:51 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:53.207 01:25:51 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70811 00:05:53.207 01:25:51 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 70811 ']' 00:05:53.207 01:25:51 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 70811 00:05:53.208 01:25:51 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:53.208 01:25:51 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:53.208 01:25:51 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70811 00:05:53.208 01:25:52 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:53.208 01:25:52 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:53.208 killing process with pid 70811 00:05:53.208 01:25:52 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70811' 00:05:53.208 01:25:52 alias_rpc -- common/autotest_common.sh@969 -- # kill 70811 00:05:53.208 01:25:52 alias_rpc -- 
common/autotest_common.sh@974 -- # wait 70811 00:05:54.147 00:05:54.147 real 0m2.039s 00:05:54.147 user 0m1.909s 00:05:54.147 sys 0m0.656s 00:05:54.147 01:25:52 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.147 01:25:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.147 ************************************ 00:05:54.147 END TEST alias_rpc 00:05:54.147 ************************************ 00:05:54.147 01:25:52 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:54.147 01:25:52 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:54.147 01:25:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.147 01:25:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.147 01:25:52 -- common/autotest_common.sh@10 -- # set +x 00:05:54.147 ************************************ 00:05:54.147 START TEST spdkcli_tcp 00:05:54.147 ************************************ 00:05:54.147 01:25:52 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:54.147 * Looking for test storage... 
00:05:54.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:54.147 01:25:52 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:54.147 01:25:52 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:54.147 01:25:52 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:54.147 01:25:52 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:54.147 01:25:52 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.147 01:25:52 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.147 01:25:52 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.147 01:25:52 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.147 01:25:52 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.147 01:25:52 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.147 01:25:52 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.147 01:25:52 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.147 01:25:52 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.147 01:25:52 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.147 01:25:52 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.147 01:25:52 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:54.147 01:25:52 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:54.147 01:25:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.147 01:25:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.147 01:25:52 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:54.147 01:25:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:54.147 01:25:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.147 01:25:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:54.148 01:25:52 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.148 01:25:52 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:54.148 01:25:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:54.148 01:25:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.148 01:25:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:54.148 01:25:52 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.148 01:25:52 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.148 01:25:52 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.148 01:25:52 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:54.148 01:25:52 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.148 01:25:52 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:54.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.148 --rc genhtml_branch_coverage=1 00:05:54.148 --rc genhtml_function_coverage=1 00:05:54.148 --rc genhtml_legend=1 00:05:54.148 --rc geninfo_all_blocks=1 00:05:54.148 --rc geninfo_unexecuted_blocks=1 00:05:54.148 00:05:54.148 ' 00:05:54.148 01:25:52 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:54.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.148 --rc genhtml_branch_coverage=1 00:05:54.148 --rc genhtml_function_coverage=1 00:05:54.148 --rc genhtml_legend=1 00:05:54.148 --rc geninfo_all_blocks=1 00:05:54.148 --rc geninfo_unexecuted_blocks=1 00:05:54.148 00:05:54.148 ' 00:05:54.148 01:25:52 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:54.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.148 --rc genhtml_branch_coverage=1 00:05:54.148 --rc genhtml_function_coverage=1 00:05:54.148 --rc genhtml_legend=1 00:05:54.148 --rc geninfo_all_blocks=1 00:05:54.148 --rc geninfo_unexecuted_blocks=1 00:05:54.148 00:05:54.148 ' 00:05:54.148 01:25:52 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:54.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.148 --rc genhtml_branch_coverage=1 00:05:54.148 --rc genhtml_function_coverage=1 00:05:54.148 --rc genhtml_legend=1 00:05:54.148 --rc geninfo_all_blocks=1 00:05:54.148 --rc geninfo_unexecuted_blocks=1 00:05:54.148 00:05:54.148 ' 00:05:54.148 01:25:52 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:54.148 01:25:52 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:54.148 01:25:52 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:54.148 01:25:52 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:54.148 01:25:52 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:54.148 01:25:52 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:54.148 01:25:52 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:54.148 01:25:52 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:54.148 01:25:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:54.148 01:25:52 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70896 00:05:54.148 01:25:52 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:54.148 01:25:52 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70896 00:05:54.148 01:25:52 spdkcli_tcp -- 
common/autotest_common.sh@831 -- # '[' -z 70896 ']' 00:05:54.148 01:25:52 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.148 01:25:52 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.148 01:25:52 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.148 01:25:52 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.148 01:25:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:54.407 [2024-10-09 01:25:53.074371] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:05:54.407 [2024-10-09 01:25:53.074521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70896 ] 00:05:54.407 [2024-10-09 01:25:53.211201] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:54.407 [2024-10-09 01:25:53.240786] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.666 [2024-10-09 01:25:53.313139] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.666 [2024-10-09 01:25:53.313235] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.235 01:25:53 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.235 01:25:53 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:55.235 01:25:53 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70913 00:05:55.235 01:25:53 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:55.235 01:25:53 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:55.235 [ 00:05:55.235 "bdev_malloc_delete", 00:05:55.235 "bdev_malloc_create", 00:05:55.235 "bdev_null_resize", 00:05:55.235 "bdev_null_delete", 00:05:55.235 "bdev_null_create", 00:05:55.235 "bdev_nvme_cuse_unregister", 00:05:55.235 "bdev_nvme_cuse_register", 00:05:55.235 "bdev_opal_new_user", 00:05:55.235 "bdev_opal_set_lock_state", 00:05:55.235 "bdev_opal_delete", 00:05:55.235 "bdev_opal_get_info", 00:05:55.235 "bdev_opal_create", 00:05:55.235 "bdev_nvme_opal_revert", 00:05:55.235 "bdev_nvme_opal_init", 00:05:55.235 "bdev_nvme_send_cmd", 00:05:55.235 "bdev_nvme_set_keys", 00:05:55.235 "bdev_nvme_get_path_iostat", 00:05:55.235 "bdev_nvme_get_mdns_discovery_info", 00:05:55.235 "bdev_nvme_stop_mdns_discovery", 00:05:55.235 "bdev_nvme_start_mdns_discovery", 00:05:55.235 "bdev_nvme_set_multipath_policy", 00:05:55.235 "bdev_nvme_set_preferred_path", 00:05:55.235 "bdev_nvme_get_io_paths", 00:05:55.235 "bdev_nvme_remove_error_injection", 00:05:55.235 "bdev_nvme_add_error_injection", 00:05:55.235 "bdev_nvme_get_discovery_info", 00:05:55.235 "bdev_nvme_stop_discovery", 00:05:55.235 "bdev_nvme_start_discovery", 00:05:55.235 
"bdev_nvme_get_controller_health_info", 00:05:55.235 "bdev_nvme_disable_controller", 00:05:55.235 "bdev_nvme_enable_controller", 00:05:55.235 "bdev_nvme_reset_controller", 00:05:55.235 "bdev_nvme_get_transport_statistics", 00:05:55.235 "bdev_nvme_apply_firmware", 00:05:55.235 "bdev_nvme_detach_controller", 00:05:55.235 "bdev_nvme_get_controllers", 00:05:55.235 "bdev_nvme_attach_controller", 00:05:55.235 "bdev_nvme_set_hotplug", 00:05:55.235 "bdev_nvme_set_options", 00:05:55.235 "bdev_passthru_delete", 00:05:55.235 "bdev_passthru_create", 00:05:55.235 "bdev_lvol_set_parent_bdev", 00:05:55.235 "bdev_lvol_set_parent", 00:05:55.235 "bdev_lvol_check_shallow_copy", 00:05:55.235 "bdev_lvol_start_shallow_copy", 00:05:55.235 "bdev_lvol_grow_lvstore", 00:05:55.235 "bdev_lvol_get_lvols", 00:05:55.235 "bdev_lvol_get_lvstores", 00:05:55.235 "bdev_lvol_delete", 00:05:55.235 "bdev_lvol_set_read_only", 00:05:55.235 "bdev_lvol_resize", 00:05:55.235 "bdev_lvol_decouple_parent", 00:05:55.235 "bdev_lvol_inflate", 00:05:55.235 "bdev_lvol_rename", 00:05:55.235 "bdev_lvol_clone_bdev", 00:05:55.235 "bdev_lvol_clone", 00:05:55.235 "bdev_lvol_snapshot", 00:05:55.235 "bdev_lvol_create", 00:05:55.235 "bdev_lvol_delete_lvstore", 00:05:55.235 "bdev_lvol_rename_lvstore", 00:05:55.235 "bdev_lvol_create_lvstore", 00:05:55.235 "bdev_raid_set_options", 00:05:55.235 "bdev_raid_remove_base_bdev", 00:05:55.235 "bdev_raid_add_base_bdev", 00:05:55.235 "bdev_raid_delete", 00:05:55.235 "bdev_raid_create", 00:05:55.235 "bdev_raid_get_bdevs", 00:05:55.235 "bdev_error_inject_error", 00:05:55.235 "bdev_error_delete", 00:05:55.235 "bdev_error_create", 00:05:55.235 "bdev_split_delete", 00:05:55.235 "bdev_split_create", 00:05:55.235 "bdev_delay_delete", 00:05:55.235 "bdev_delay_create", 00:05:55.235 "bdev_delay_update_latency", 00:05:55.235 "bdev_zone_block_delete", 00:05:55.235 "bdev_zone_block_create", 00:05:55.235 "blobfs_create", 00:05:55.235 "blobfs_detect", 00:05:55.235 "blobfs_set_cache_size", 00:05:55.235 
"bdev_aio_delete", 00:05:55.235 "bdev_aio_rescan", 00:05:55.235 "bdev_aio_create", 00:05:55.235 "bdev_ftl_set_property", 00:05:55.235 "bdev_ftl_get_properties", 00:05:55.235 "bdev_ftl_get_stats", 00:05:55.235 "bdev_ftl_unmap", 00:05:55.235 "bdev_ftl_unload", 00:05:55.235 "bdev_ftl_delete", 00:05:55.235 "bdev_ftl_load", 00:05:55.235 "bdev_ftl_create", 00:05:55.235 "bdev_virtio_attach_controller", 00:05:55.235 "bdev_virtio_scsi_get_devices", 00:05:55.235 "bdev_virtio_detach_controller", 00:05:55.235 "bdev_virtio_blk_set_hotplug", 00:05:55.235 "bdev_iscsi_delete", 00:05:55.235 "bdev_iscsi_create", 00:05:55.235 "bdev_iscsi_set_options", 00:05:55.235 "accel_error_inject_error", 00:05:55.235 "ioat_scan_accel_module", 00:05:55.235 "dsa_scan_accel_module", 00:05:55.235 "iaa_scan_accel_module", 00:05:55.235 "keyring_file_remove_key", 00:05:55.235 "keyring_file_add_key", 00:05:55.235 "keyring_linux_set_options", 00:05:55.235 "fsdev_aio_delete", 00:05:55.235 "fsdev_aio_create", 00:05:55.235 "iscsi_get_histogram", 00:05:55.235 "iscsi_enable_histogram", 00:05:55.235 "iscsi_set_options", 00:05:55.235 "iscsi_get_auth_groups", 00:05:55.235 "iscsi_auth_group_remove_secret", 00:05:55.235 "iscsi_auth_group_add_secret", 00:05:55.235 "iscsi_delete_auth_group", 00:05:55.235 "iscsi_create_auth_group", 00:05:55.235 "iscsi_set_discovery_auth", 00:05:55.235 "iscsi_get_options", 00:05:55.235 "iscsi_target_node_request_logout", 00:05:55.235 "iscsi_target_node_set_redirect", 00:05:55.235 "iscsi_target_node_set_auth", 00:05:55.235 "iscsi_target_node_add_lun", 00:05:55.235 "iscsi_get_stats", 00:05:55.235 "iscsi_get_connections", 00:05:55.235 "iscsi_portal_group_set_auth", 00:05:55.235 "iscsi_start_portal_group", 00:05:55.235 "iscsi_delete_portal_group", 00:05:55.235 "iscsi_create_portal_group", 00:05:55.236 "iscsi_get_portal_groups", 00:05:55.236 "iscsi_delete_target_node", 00:05:55.236 "iscsi_target_node_remove_pg_ig_maps", 00:05:55.236 "iscsi_target_node_add_pg_ig_maps", 00:05:55.236 
"iscsi_create_target_node", 00:05:55.236 "iscsi_get_target_nodes", 00:05:55.236 "iscsi_delete_initiator_group", 00:05:55.236 "iscsi_initiator_group_remove_initiators", 00:05:55.236 "iscsi_initiator_group_add_initiators", 00:05:55.236 "iscsi_create_initiator_group", 00:05:55.236 "iscsi_get_initiator_groups", 00:05:55.236 "nvmf_set_crdt", 00:05:55.236 "nvmf_set_config", 00:05:55.236 "nvmf_set_max_subsystems", 00:05:55.236 "nvmf_stop_mdns_prr", 00:05:55.236 "nvmf_publish_mdns_prr", 00:05:55.236 "nvmf_subsystem_get_listeners", 00:05:55.236 "nvmf_subsystem_get_qpairs", 00:05:55.236 "nvmf_subsystem_get_controllers", 00:05:55.236 "nvmf_get_stats", 00:05:55.236 "nvmf_get_transports", 00:05:55.236 "nvmf_create_transport", 00:05:55.236 "nvmf_get_targets", 00:05:55.236 "nvmf_delete_target", 00:05:55.236 "nvmf_create_target", 00:05:55.236 "nvmf_subsystem_allow_any_host", 00:05:55.236 "nvmf_subsystem_set_keys", 00:05:55.236 "nvmf_subsystem_remove_host", 00:05:55.236 "nvmf_subsystem_add_host", 00:05:55.236 "nvmf_ns_remove_host", 00:05:55.236 "nvmf_ns_add_host", 00:05:55.236 "nvmf_subsystem_remove_ns", 00:05:55.236 "nvmf_subsystem_set_ns_ana_group", 00:05:55.236 "nvmf_subsystem_add_ns", 00:05:55.236 "nvmf_subsystem_listener_set_ana_state", 00:05:55.236 "nvmf_discovery_get_referrals", 00:05:55.236 "nvmf_discovery_remove_referral", 00:05:55.236 "nvmf_discovery_add_referral", 00:05:55.236 "nvmf_subsystem_remove_listener", 00:05:55.236 "nvmf_subsystem_add_listener", 00:05:55.236 "nvmf_delete_subsystem", 00:05:55.236 "nvmf_create_subsystem", 00:05:55.236 "nvmf_get_subsystems", 00:05:55.236 "env_dpdk_get_mem_stats", 00:05:55.236 "nbd_get_disks", 00:05:55.236 "nbd_stop_disk", 00:05:55.236 "nbd_start_disk", 00:05:55.236 "ublk_recover_disk", 00:05:55.236 "ublk_get_disks", 00:05:55.236 "ublk_stop_disk", 00:05:55.236 "ublk_start_disk", 00:05:55.236 "ublk_destroy_target", 00:05:55.236 "ublk_create_target", 00:05:55.236 "virtio_blk_create_transport", 00:05:55.236 "virtio_blk_get_transports", 
00:05:55.236 "vhost_controller_set_coalescing", 00:05:55.236 "vhost_get_controllers", 00:05:55.236 "vhost_delete_controller", 00:05:55.236 "vhost_create_blk_controller", 00:05:55.236 "vhost_scsi_controller_remove_target", 00:05:55.236 "vhost_scsi_controller_add_target", 00:05:55.236 "vhost_start_scsi_controller", 00:05:55.236 "vhost_create_scsi_controller", 00:05:55.236 "thread_set_cpumask", 00:05:55.236 "scheduler_set_options", 00:05:55.236 "framework_get_governor", 00:05:55.236 "framework_get_scheduler", 00:05:55.236 "framework_set_scheduler", 00:05:55.236 "framework_get_reactors", 00:05:55.236 "thread_get_io_channels", 00:05:55.236 "thread_get_pollers", 00:05:55.236 "thread_get_stats", 00:05:55.236 "framework_monitor_context_switch", 00:05:55.236 "spdk_kill_instance", 00:05:55.236 "log_enable_timestamps", 00:05:55.236 "log_get_flags", 00:05:55.236 "log_clear_flag", 00:05:55.236 "log_set_flag", 00:05:55.236 "log_get_level", 00:05:55.236 "log_set_level", 00:05:55.236 "log_get_print_level", 00:05:55.236 "log_set_print_level", 00:05:55.236 "framework_enable_cpumask_locks", 00:05:55.236 "framework_disable_cpumask_locks", 00:05:55.236 "framework_wait_init", 00:05:55.236 "framework_start_init", 00:05:55.236 "scsi_get_devices", 00:05:55.236 "bdev_get_histogram", 00:05:55.236 "bdev_enable_histogram", 00:05:55.236 "bdev_set_qos_limit", 00:05:55.236 "bdev_set_qd_sampling_period", 00:05:55.236 "bdev_get_bdevs", 00:05:55.236 "bdev_reset_iostat", 00:05:55.236 "bdev_get_iostat", 00:05:55.236 "bdev_examine", 00:05:55.236 "bdev_wait_for_examine", 00:05:55.236 "bdev_set_options", 00:05:55.236 "accel_get_stats", 00:05:55.236 "accel_set_options", 00:05:55.236 "accel_set_driver", 00:05:55.236 "accel_crypto_key_destroy", 00:05:55.236 "accel_crypto_keys_get", 00:05:55.236 "accel_crypto_key_create", 00:05:55.236 "accel_assign_opc", 00:05:55.236 "accel_get_module_info", 00:05:55.236 "accel_get_opc_assignments", 00:05:55.236 "vmd_rescan", 00:05:55.236 "vmd_remove_device", 00:05:55.236 
"vmd_enable", 00:05:55.236 "sock_get_default_impl", 00:05:55.236 "sock_set_default_impl", 00:05:55.236 "sock_impl_set_options", 00:05:55.236 "sock_impl_get_options", 00:05:55.236 "iobuf_get_stats", 00:05:55.236 "iobuf_set_options", 00:05:55.236 "keyring_get_keys", 00:05:55.236 "framework_get_pci_devices", 00:05:55.236 "framework_get_config", 00:05:55.236 "framework_get_subsystems", 00:05:55.236 "fsdev_set_opts", 00:05:55.236 "fsdev_get_opts", 00:05:55.236 "trace_get_info", 00:05:55.236 "trace_get_tpoint_group_mask", 00:05:55.236 "trace_disable_tpoint_group", 00:05:55.236 "trace_enable_tpoint_group", 00:05:55.236 "trace_clear_tpoint_mask", 00:05:55.236 "trace_set_tpoint_mask", 00:05:55.236 "notify_get_notifications", 00:05:55.236 "notify_get_types", 00:05:55.236 "spdk_get_version", 00:05:55.236 "rpc_get_methods" 00:05:55.236 ] 00:05:55.236 01:25:54 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:55.236 01:25:54 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:55.236 01:25:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:55.496 01:25:54 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:55.496 01:25:54 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70896 00:05:55.496 01:25:54 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70896 ']' 00:05:55.496 01:25:54 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70896 00:05:55.496 01:25:54 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:55.496 01:25:54 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.496 01:25:54 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70896 00:05:55.496 01:25:54 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.496 01:25:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.496 killing process with pid 70896 00:05:55.496 01:25:54 spdkcli_tcp -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 70896' 00:05:55.496 01:25:54 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70896 00:05:55.496 01:25:54 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70896 00:05:56.066 00:05:56.066 real 0m2.090s 00:05:56.066 user 0m3.316s 00:05:56.066 sys 0m0.729s 00:05:56.066 01:25:54 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.066 01:25:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:56.066 ************************************ 00:05:56.066 END TEST spdkcli_tcp 00:05:56.066 ************************************ 00:05:56.066 01:25:54 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:56.066 01:25:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.066 01:25:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.066 01:25:54 -- common/autotest_common.sh@10 -- # set +x 00:05:56.066 ************************************ 00:05:56.066 START TEST dpdk_mem_utility 00:05:56.066 ************************************ 00:05:56.066 01:25:54 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:56.342 * Looking for test storage... 
00:05:56.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility
00:05:56.342 01:25:55 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:05:56.342 01:25:55 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version
00:05:56.342 01:25:55 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:05:56.342 01:25:55 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-:
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-:
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<'
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:56.342 01:25:55 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0
00:05:56.342 01:25:55 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:56.342 01:25:55 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:05:56.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:56.342 --rc genhtml_branch_coverage=1
00:05:56.342 --rc genhtml_function_coverage=1
00:05:56.342 --rc genhtml_legend=1
00:05:56.342 --rc geninfo_all_blocks=1
00:05:56.342 --rc geninfo_unexecuted_blocks=1
00:05:56.342
00:05:56.342 '
00:05:56.342 01:25:55 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:05:56.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:56.342 --rc genhtml_branch_coverage=1
00:05:56.342 --rc genhtml_function_coverage=1
00:05:56.342 --rc genhtml_legend=1
00:05:56.342 --rc geninfo_all_blocks=1
00:05:56.342 --rc geninfo_unexecuted_blocks=1
00:05:56.342
00:05:56.342 '
00:05:56.342 01:25:55 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:05:56.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:56.342 --rc genhtml_branch_coverage=1
00:05:56.342 --rc genhtml_function_coverage=1
00:05:56.342 --rc genhtml_legend=1
00:05:56.342 --rc geninfo_all_blocks=1
00:05:56.342 --rc geninfo_unexecuted_blocks=1
00:05:56.342
00:05:56.342 '
00:05:56.342 01:25:55 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:05:56.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:56.342 --rc genhtml_branch_coverage=1
00:05:56.342 --rc genhtml_function_coverage=1
00:05:56.342 --rc genhtml_legend=1
00:05:56.342 --rc geninfo_all_blocks=1
00:05:56.342 --rc geninfo_unexecuted_blocks=1
00:05:56.342
00:05:56.342 '
00:05:56.342 01:25:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:05:56.342 01:25:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71002
00:05:56.342 01:25:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:56.342 01:25:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71002
00:05:56.342 01:25:55 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 71002 ']'
00:05:56.342 01:25:55 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:56.342 01:25:55 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:56.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
01:25:55 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:56.342 01:25:55 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:56.342 01:25:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:56.342 [2024-10-09 01:25:55.217364] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization...
[2024-10-09 01:25:55.217497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71002 ]
00:05:56.647 [2024-10-09 01:25:55.349607] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:05:56.647 [2024-10-09 01:25:55.379297] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:56.647 [2024-10-09 01:25:55.448351] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:05:57.277 01:25:56 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:57.277 01:25:56 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0
00:05:57.277 01:25:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:05:57.277 01:25:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:05:57.277 01:25:56 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:57.277 01:25:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:57.277 {
00:05:57.277 "filename": "/tmp/spdk_mem_dump.txt"
00:05:57.277 }
00:05:57.277 01:25:56 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:57.277 01:25:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:05:57.277 DPDK memory size 860.000000 MiB in 1 heap(s)
00:05:57.277 1 heaps totaling size 860.000000 MiB
00:05:57.277 size: 860.000000 MiB heap id: 0
00:05:57.277 end heaps----------
00:05:57.277 9 mempools totaling size 642.649841 MiB
00:05:57.277 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:05:57.277 size: 158.602051 MiB name: PDU_data_out_Pool
00:05:57.277 size: 92.545471 MiB name: bdev_io_71002
00:05:57.277 size: 51.011292 MiB name: evtpool_71002
00:05:57.277 size: 50.003479 MiB name: msgpool_71002
00:05:57.277 size: 36.509338 MiB name: fsdev_io_71002
00:05:57.277 size: 21.763794 MiB name: PDU_Pool
00:05:57.277 size: 19.513306 MiB name: SCSI_TASK_Pool
00:05:57.277 size: 0.026123 MiB name: Session_Pool
00:05:57.277 end mempools-------
00:05:57.277 6 memzones totaling size 4.142822 MiB
00:05:57.277 size: 1.000366 MiB name: RG_ring_0_71002
00:05:57.277 size: 1.000366 MiB name: RG_ring_1_71002
00:05:57.277 size: 1.000366 MiB name: RG_ring_4_71002
00:05:57.277 size: 1.000366 MiB name: RG_ring_5_71002
00:05:57.277 size: 0.125366 MiB name: RG_ring_2_71002
00:05:57.277 size: 0.015991 MiB name: RG_ring_3_71002
00:05:57.277 end memzones-------
00:05:57.277 01:25:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:05:57.277 heap id: 0 total size: 860.000000 MiB number of busy elements: 303 number of free elements: 16
00:05:57.277 list of free elements. size: 13.812256 MiB
00:05:57.277 element at address: 0x200000400000 with size: 1.999512 MiB
00:05:57.277 element at address: 0x200000800000 with size: 1.996948 MiB
00:05:57.277 element at address: 0x20001bc00000 with size: 0.999878 MiB
00:05:57.277 element at address: 0x20001be00000 with size: 0.999878 MiB
00:05:57.277 element at address: 0x200034a00000 with size: 0.994446 MiB
00:05:57.277 element at address: 0x200009600000 with size: 0.959839 MiB
00:05:57.277 element at address: 0x200015e00000 with size: 0.954285 MiB
00:05:57.277 element at address: 0x20001c000000 with size: 0.936584 MiB
00:05:57.277 element at address: 0x200000200000 with size: 0.709839 MiB
00:05:57.277 element at address: 0x20001d800000 with size: 0.568237 MiB
00:05:57.277 element at address: 0x20000d800000 with size: 0.489258 MiB
00:05:57.277 element at address: 0x200003e00000 with size: 0.488464 MiB
00:05:57.277 element at address: 0x20001c200000 with size: 0.485657 MiB
00:05:57.277 element at address: 0x200007000000 with size: 0.480469 MiB
00:05:57.277 element at address: 0x20002ac00000 with size: 0.395752 MiB
00:05:57.277 element at address: 0x200003a00000 with size: 0.353210 MiB
00:05:57.277 list of standard malloc elements. size: 199.391052 MiB
00:05:57.277 element at address: 0x20000d9fff80 with size: 132.000122 MiB
00:05:57.277 element at address: 0x2000097fff80 with size: 64.000122 MiB
00:05:57.277 element at address: 0x20001bcfff80 with size: 1.000122 MiB
00:05:57.277 element at address: 0x20001befff80 with size: 1.000122 MiB
00:05:57.277 element at address: 0x20001c0fff80 with size: 1.000122 MiB
00:05:57.277 element at address: 0x2000003b9f00 with size: 0.265747 MiB
00:05:57.277 element at address: 0x20001c0eff00 with size: 0.062622 MiB
00:05:57.277 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:05:57.277 element at address: 0x20001c0efdc0 with size: 0.000305 MiB
00:05:57.277 element at address: 0x2000002b5b80 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b5c40 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b5d00 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b5dc0 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b5e80 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b5f40 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b6000 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b60c0 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b6180 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b6240 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b6300 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b63c0 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b6480 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b6540 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b6600 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b66c0 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b68c0 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b6980 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b6a40 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b6b00 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b6bc0 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b6c80 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b6d40 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b6e00 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b6ec0 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b6f80 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b7040 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b7100 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b71c0 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b7280 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b7340 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b7400 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b74c0 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b7580 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b7640 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b7700 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b77c0 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b7880 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b7940 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b7a00 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b7ac0 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b7b80 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000002b7c40 with size: 0.000183 MiB
00:05:57.277 element at address: 0x2000003b9e40 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003a5a6c0 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003a5a8c0 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003a5eb80 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003a7ee40 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003a7ef00 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003a7efc0 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003a7f080 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003a7f140 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003a7f200 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003a7f2c0 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003a7f380 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003a7f440 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003a7f500 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003a7f5c0 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003a7f680 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003aff940 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003affb40 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7d0c0 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7d180 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7d240 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7d300 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7d3c0 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7d480 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7d540 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7d600 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7d6c0 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7d780 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7d840 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7d900 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7d9c0 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7da80 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7db40 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7dc00 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7dcc0 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7dd80 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7de40 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7df00 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7dfc0 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7e080 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7e140 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7e200 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7e2c0 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7e380 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7e440 with size: 0.000183 MiB
00:05:57.277 element at address: 0x200003e7e500 with size: 0.000183 MiB
00:05:57.278 element at address: 0x200003e7e5c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x200003e7e680 with size: 0.000183 MiB
00:05:57.278 element at address: 0x200003e7e740 with size: 0.000183 MiB
00:05:57.278 element at address: 0x200003e7e800 with size: 0.000183 MiB
00:05:57.278 element at address: 0x200003e7e8c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x200003e7e980 with size: 0.000183 MiB
00:05:57.278 element at address: 0x200003e7ea40 with size: 0.000183 MiB
00:05:57.278 element at address: 0x200003e7eb00 with size: 0.000183 MiB
00:05:57.278 element at address: 0x200003e7ebc0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x200003e7ec80 with size: 0.000183 MiB
00:05:57.278 element at address: 0x200003e7ed40 with size: 0.000183 MiB
00:05:57.278 element at address: 0x200003eff000 with size: 0.000183 MiB
00:05:57.278 element at address: 0x200003eff0c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20000707b000 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20000707b0c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20000707b180 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20000707b240 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20000707b300 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20000707b3c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20000707b480 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20000707b540 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20000707b600 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20000707b6c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x2000070fb980 with size: 0.000183 MiB
00:05:57.278 element at address: 0x2000096fdd80 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20000d87d400 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20000d87d4c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20000d87d580 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20000d87d640 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20000d87d700 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20000d87d7c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20000d87d880 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20000d87d940 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20000d87da00 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20000d87dac0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20000d8fdd80 with size: 0.000183 MiB
00:05:57.278 element at address: 0x200015ef44c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001c0efc40 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001c0efd00 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001c2bc740 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d891780 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d891840 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d891900 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d8919c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d891a80 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d891b40 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d891c00 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d891cc0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d891d80 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d891e40 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d891f00 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d891fc0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d892080 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d892140 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d892200 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d8922c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d892380 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d892440 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d892500 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d8925c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d892680 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d892740 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d892800 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d8928c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d892980 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d892a40 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d892b00 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d892bc0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d892c80 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d892d40 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d892e00 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d892ec0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d892f80 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d893040 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d893100 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d8931c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d893280 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d893340 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d893400 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d8934c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d893580 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d893640 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d893700 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d8937c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d893880 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d893940 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d893a00 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d893ac0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d893b80 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d893c40 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d893d00 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d893dc0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d893e80 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d893f40 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d894000 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d8940c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d894180 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d894240 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d894300 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d8943c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d894480 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d894540 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d894600 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d8946c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d894780 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d894840 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d894900 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d8949c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d894a80 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d894b40 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d894c00 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d894cc0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d894d80 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d894e40 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d894f00 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d894fc0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d895080 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d895140 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d895200 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d8952c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d895380 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20001d895440 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac65500 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac655c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6c480 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6c540 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6c600 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6c780 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6c840 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6c900 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6ca80 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6cb40 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6cc00 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6cd80 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6ce40 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6cf00 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6d080 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6d140 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6d200 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6d380 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6d440 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6d500 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6d680 with size: 0.000183 MiB
00:05:57.278 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6eb80 with 
size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:57.278 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:57.278 list of memzone associated elements. 
size: 646.796692 MiB 00:05:57.278 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:57.278 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:57.278 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:57.278 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:57.278 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:57.278 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_71002_0 00:05:57.278 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:57.278 associated memzone info: size: 48.002930 MiB name: MP_evtpool_71002_0 00:05:57.278 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:57.278 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71002_0 00:05:57.278 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:57.278 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71002_0 00:05:57.278 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:57.278 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:57.278 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:57.278 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:57.278 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:57.278 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_71002 00:05:57.278 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:57.278 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71002 00:05:57.278 element at address: 0x2000002b7d00 with size: 1.008118 MiB 00:05:57.278 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71002 00:05:57.278 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:57.278 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:57.278 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:57.278 associated memzone 
info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:57.278 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:57.278 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:57.278 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:57.278 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:57.278 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:57.278 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71002 00:05:57.278 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:57.278 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71002 00:05:57.278 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:57.278 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71002 00:05:57.278 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:57.278 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71002 00:05:57.278 element at address: 0x200003a7f740 with size: 0.500488 MiB 00:05:57.278 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71002 00:05:57.278 element at address: 0x200003e7ee00 with size: 0.500488 MiB 00:05:57.278 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71002 00:05:57.278 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:57.278 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:57.278 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:57.278 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:57.278 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:57.278 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:57.278 element at address: 0x200003a5ec40 with size: 0.125488 MiB 00:05:57.278 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71002 00:05:57.278 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:05:57.278 associated 
memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:57.278 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:05:57.278 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:57.278 element at address: 0x200003a5a980 with size: 0.016113 MiB 00:05:57.278 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71002 00:05:57.278 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:05:57.278 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:57.278 element at address: 0x2000002b6780 with size: 0.000305 MiB 00:05:57.278 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71002 00:05:57.278 element at address: 0x200003affa00 with size: 0.000305 MiB 00:05:57.278 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71002 00:05:57.278 element at address: 0x200003a5a780 with size: 0.000305 MiB 00:05:57.278 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71002 00:05:57.278 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:05:57.278 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:57.279 01:25:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:57.279 01:25:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71002 00:05:57.279 01:25:56 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 71002 ']' 00:05:57.279 01:25:56 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 71002 00:05:57.279 01:25:56 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:57.279 01:25:56 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.279 01:25:56 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71002 00:05:57.537 01:25:56 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:57.537 01:25:56 dpdk_mem_utility -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:57.537 killing process with pid 71002 00:05:57.537 01:25:56 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71002' 00:05:57.537 01:25:56 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 71002 00:05:57.537 01:25:56 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 71002 00:05:58.107 00:05:58.107 real 0m1.928s 00:05:58.107 user 0m1.669s 00:05:58.107 sys 0m0.690s 00:05:58.107 01:25:56 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.107 01:25:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:58.107 ************************************ 00:05:58.107 END TEST dpdk_mem_utility 00:05:58.107 ************************************ 00:05:58.107 01:25:56 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:58.107 01:25:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.107 01:25:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.107 01:25:56 -- common/autotest_common.sh@10 -- # set +x 00:05:58.107 ************************************ 00:05:58.107 START TEST event 00:05:58.107 ************************************ 00:05:58.107 01:25:56 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:58.368 * Looking for test storage... 
00:05:58.368 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:58.368 01:25:57 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:58.368 01:25:57 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:58.368 01:25:57 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:58.368 01:25:57 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:58.368 01:25:57 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.368 01:25:57 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.368 01:25:57 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.368 01:25:57 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.368 01:25:57 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.368 01:25:57 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.368 01:25:57 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.368 01:25:57 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.368 01:25:57 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.368 01:25:57 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.368 01:25:57 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.368 01:25:57 event -- scripts/common.sh@344 -- # case "$op" in 00:05:58.368 01:25:57 event -- scripts/common.sh@345 -- # : 1 00:05:58.368 01:25:57 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.368 01:25:57 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.368 01:25:57 event -- scripts/common.sh@365 -- # decimal 1 00:05:58.368 01:25:57 event -- scripts/common.sh@353 -- # local d=1 00:05:58.368 01:25:57 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.368 01:25:57 event -- scripts/common.sh@355 -- # echo 1 00:05:58.368 01:25:57 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.368 01:25:57 event -- scripts/common.sh@366 -- # decimal 2 00:05:58.368 01:25:57 event -- scripts/common.sh@353 -- # local d=2 00:05:58.368 01:25:57 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.368 01:25:57 event -- scripts/common.sh@355 -- # echo 2 00:05:58.368 01:25:57 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.368 01:25:57 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.368 01:25:57 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.368 01:25:57 event -- scripts/common.sh@368 -- # return 0 00:05:58.368 01:25:57 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.368 01:25:57 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:58.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.368 --rc genhtml_branch_coverage=1 00:05:58.368 --rc genhtml_function_coverage=1 00:05:58.368 --rc genhtml_legend=1 00:05:58.368 --rc geninfo_all_blocks=1 00:05:58.368 --rc geninfo_unexecuted_blocks=1 00:05:58.368 00:05:58.368 ' 00:05:58.368 01:25:57 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:58.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.368 --rc genhtml_branch_coverage=1 00:05:58.368 --rc genhtml_function_coverage=1 00:05:58.368 --rc genhtml_legend=1 00:05:58.368 --rc geninfo_all_blocks=1 00:05:58.368 --rc geninfo_unexecuted_blocks=1 00:05:58.368 00:05:58.368 ' 00:05:58.368 01:25:57 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:58.368 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:58.368 --rc genhtml_branch_coverage=1 00:05:58.368 --rc genhtml_function_coverage=1 00:05:58.368 --rc genhtml_legend=1 00:05:58.368 --rc geninfo_all_blocks=1 00:05:58.368 --rc geninfo_unexecuted_blocks=1 00:05:58.368 00:05:58.368 ' 00:05:58.368 01:25:57 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:58.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.368 --rc genhtml_branch_coverage=1 00:05:58.368 --rc genhtml_function_coverage=1 00:05:58.368 --rc genhtml_legend=1 00:05:58.368 --rc geninfo_all_blocks=1 00:05:58.368 --rc geninfo_unexecuted_blocks=1 00:05:58.368 00:05:58.368 ' 00:05:58.368 01:25:57 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:58.368 01:25:57 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:58.368 01:25:57 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:58.368 01:25:57 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:58.368 01:25:57 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.368 01:25:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.368 ************************************ 00:05:58.368 START TEST event_perf 00:05:58.368 ************************************ 00:05:58.368 01:25:57 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:58.368 Running I/O for 1 seconds...[2024-10-09 01:25:57.172718] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:05:58.369 [2024-10-09 01:25:57.172861] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71082 ] 00:05:58.628 [2024-10-09 01:25:57.308941] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:58.628 [2024-10-09 01:25:57.335923] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:58.628 [2024-10-09 01:25:57.409323] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.628 [2024-10-09 01:25:57.409561] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.628 Running I/O for 1 seconds...[2024-10-09 01:25:57.409847] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.628 [2024-10-09 01:25:57.409917] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.006 00:06:00.006 lcore 0: 104275 00:06:00.006 lcore 1: 104276 00:06:00.006 lcore 2: 104275 00:06:00.006 lcore 3: 104275 00:06:00.006 done. 
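The event_perf run above prints one counter per lcore ("lcore 0: 104275" and so on). A hypothetical one-liner (not part of the autotest harness) can total those counters from the captured log:

```shell
# Sum the per-lcore event counts printed by event_perf. The sample input
# below is copied from the run above; the awk filter is an illustrative
# helper, not something scripts/common.sh actually provides.
printf 'lcore 0: 104275\nlcore 1: 104276\nlcore 2: 104275\nlcore 3: 104275\n' |
    awk '/^lcore/ { total += $3 } END { printf "total events/sec: %d\n", total }'
# → total events/sec: 417101
```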
00:06:00.006 ************************************ 00:06:00.006 END TEST event_perf 00:06:00.006 ************************************ 00:06:00.006 00:06:00.006 real 0m1.425s 00:06:00.006 user 0m4.162s 00:06:00.006 sys 0m0.145s 00:06:00.006 01:25:58 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.006 01:25:58 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:00.006 01:25:58 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:00.006 01:25:58 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:00.006 01:25:58 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.006 01:25:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.006 ************************************ 00:06:00.006 START TEST event_reactor 00:06:00.006 ************************************ 00:06:00.006 01:25:58 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:00.006 [2024-10-09 01:25:58.663437] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:00.006 [2024-10-09 01:25:58.663549] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71127 ] 00:06:00.006 [2024-10-09 01:25:58.793573] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
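The scripts/common.sh xtrace earlier in this section walks a semantic version comparison (`lt 1.15 2`, deciding whether the installed lcov is new enough to need the extra `--rc` options). A condensed sketch of that logic, assuming the same split-on-dots, field-by-field numeric comparison the trace shows (the body here is a simplified rewrite, not the verbatim helper):

```shell
# lt VER1 VER2: succeed (return 0) iff VER1 < VER2, comparing dot-separated
# numeric fields; missing fields are treated as 0, and equal versions are
# not "less than".
lt() {
    local ver1 ver2 v
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
    done
    return 1
}

lt 1.15 2 && echo "1.15 < 2"
```

This matches the traced outcome: lcov 1.15 compares below 2, so the branch that sets `lcov_rc_opt` is taken.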
00:06:00.006 [2024-10-09 01:25:58.821287] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.006 [2024-10-09 01:25:58.888362] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.385 test_start 00:06:01.385 oneshot 00:06:01.385 tick 100 00:06:01.385 tick 100 00:06:01.385 tick 250 00:06:01.385 tick 100 00:06:01.385 tick 100 00:06:01.385 tick 100 00:06:01.385 tick 250 00:06:01.385 tick 500 00:06:01.385 tick 100 00:06:01.385 tick 100 00:06:01.385 tick 250 00:06:01.385 tick 100 00:06:01.385 tick 100 00:06:01.385 test_end 00:06:01.385 00:06:01.385 real 0m1.404s 00:06:01.385 user 0m1.173s 00:06:01.385 sys 0m0.124s 00:06:01.385 01:26:00 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.385 01:26:00 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:01.385 ************************************ 00:06:01.385 END TEST event_reactor 00:06:01.385 ************************************ 00:06:01.385 01:26:00 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:01.385 01:26:00 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:01.385 01:26:00 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.385 01:26:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:01.385 ************************************ 00:06:01.385 START TEST event_reactor_perf 00:06:01.385 ************************************ 00:06:01.385 01:26:00 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:01.385 [2024-10-09 01:26:00.137161] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:06:01.385 [2024-10-09 01:26:00.137284] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71158 ] 00:06:01.385 [2024-10-09 01:26:00.267003] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:01.645 [2024-10-09 01:26:00.295929] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.645 [2024-10-09 01:26:00.368262] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.025 test_start 00:06:03.025 test_end 00:06:03.025 Performance: 402362 events per second 00:06:03.025 00:06:03.025 real 0m1.411s 00:06:03.025 user 0m1.178s 00:06:03.025 sys 0m0.126s 00:06:03.025 01:26:01 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.025 01:26:01 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:03.025 ************************************ 00:06:03.025 END TEST event_reactor_perf 00:06:03.025 ************************************ 00:06:03.025 01:26:01 event -- event/event.sh@49 -- # uname -s 00:06:03.025 01:26:01 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:03.025 01:26:01 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:03.025 01:26:01 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.025 01:26:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.025 01:26:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.025 ************************************ 00:06:03.025 START TEST event_scheduler 00:06:03.025 ************************************ 00:06:03.025 01:26:01 event.event_scheduler -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:03.025 * Looking for test storage... 00:06:03.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:03.025 01:26:01 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:03.025 01:26:01 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:03.025 01:26:01 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:03.025 01:26:01 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.025 01:26:01 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:03.025 01:26:01 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.025 01:26:01 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:03.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.025 --rc genhtml_branch_coverage=1 00:06:03.025 --rc genhtml_function_coverage=1 00:06:03.025 --rc genhtml_legend=1 00:06:03.025 --rc geninfo_all_blocks=1 00:06:03.025 --rc geninfo_unexecuted_blocks=1 00:06:03.025 00:06:03.025 ' 00:06:03.025 01:26:01 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:03.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.025 --rc genhtml_branch_coverage=1 00:06:03.025 --rc genhtml_function_coverage=1 00:06:03.025 --rc 
genhtml_legend=1 00:06:03.025 --rc geninfo_all_blocks=1 00:06:03.025 --rc geninfo_unexecuted_blocks=1 00:06:03.025 00:06:03.025 ' 00:06:03.025 01:26:01 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:03.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.025 --rc genhtml_branch_coverage=1 00:06:03.025 --rc genhtml_function_coverage=1 00:06:03.025 --rc genhtml_legend=1 00:06:03.025 --rc geninfo_all_blocks=1 00:06:03.025 --rc geninfo_unexecuted_blocks=1 00:06:03.025 00:06:03.025 ' 00:06:03.025 01:26:01 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:03.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.025 --rc genhtml_branch_coverage=1 00:06:03.025 --rc genhtml_function_coverage=1 00:06:03.025 --rc genhtml_legend=1 00:06:03.025 --rc geninfo_all_blocks=1 00:06:03.025 --rc geninfo_unexecuted_blocks=1 00:06:03.025 00:06:03.025 ' 00:06:03.025 01:26:01 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:03.025 01:26:01 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=71234 00:06:03.025 01:26:01 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:03.025 01:26:01 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.025 01:26:01 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 71234 00:06:03.025 01:26:01 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 71234 ']' 00:06:03.025 01:26:01 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.025 01:26:01 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
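The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from a waitforlisten-style helper. A minimal sketch of that polling pattern, assuming a simple socket-existence check; the function name, retry count, and check are illustrative, not the actual autotest_common.sh implementation:

```shell
# waitforsocket SOCK [MAX_RETRIES]: poll until SOCK exists as a UNIX domain
# socket (-S), sleeping briefly between attempts; fail after MAX_RETRIES.
waitforsocket() {
    local sock=$1 max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        [ -S "$sock" ] && return 0
        sleep 0.1
    done
    return 1
}

waitforsocket /var/tmp/spdk.sock 50 || echo "timed out waiting for socket"
```

The real helper additionally verifies the pid is still alive while waiting, so a crashed daemon fails fast instead of burning the full timeout.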
00:06:03.025 01:26:01 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.025 01:26:01 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.025 01:26:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:03.025 [2024-10-09 01:26:01.907746] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:03.025 [2024-10-09 01:26:01.907877] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71234 ] 00:06:03.285 [2024-10-09 01:26:02.046083] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:03.285 [2024-10-09 01:26:02.074979] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.285 [2024-10-09 01:26:02.122840] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.285 [2024-10-09 01:26:02.122976] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.285 [2024-10-09 01:26:02.123066] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.285 [2024-10-09 01:26:02.123178] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.854 01:26:02 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.854 01:26:02 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:03.854 01:26:02 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:03.854 01:26:02 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.854 01:26:02 event.event_scheduler -- 
common/autotest_common.sh@10 -- # set +x 00:06:03.854 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:03.854 POWER: Cannot set governor of lcore 0 to userspace 00:06:03.854 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:03.854 POWER: Cannot set governor of lcore 0 to performance 00:06:03.854 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:03.854 POWER: Cannot set governor of lcore 0 to userspace 00:06:03.854 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:03.854 POWER: Cannot set governor of lcore 0 to userspace 00:06:03.854 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:03.854 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:03.854 POWER: Unable to set Power Management Environment for lcore 0 00:06:03.854 [2024-10-09 01:26:02.717125] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:03.854 [2024-10-09 01:26:02.717149] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:03.854 [2024-10-09 01:26:02.717161] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:03.854 [2024-10-09 01:26:02.717189] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:03.854 [2024-10-09 01:26:02.717212] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:03.854 [2024-10-09 01:26:02.717220] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:03.854 01:26:02 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.854 01:26:02 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:03.854 01:26:02 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.854 
01:26:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.115 [2024-10-09 01:26:02.788271] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:04.115 01:26:02 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.115 01:26:02 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:04.115 01:26:02 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.115 01:26:02 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.115 01:26:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.115 ************************************ 00:06:04.115 START TEST scheduler_create_thread 00:06:04.115 ************************************ 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.115 2 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.115 3 00:06:04.115 
01:26:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.115 4 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.115 5 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.115 6 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:04.115 01:26:02 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.115 7 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.115 8 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.115 01:26:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:04.686 9 00:06:04.686 01:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.686 01:26:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:04.686 01:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.686 01:26:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.626 10 00:06:05.626 01:26:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:06:05.885 01:26:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:05.885 01:26:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.885 01:26:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.821 01:26:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.821 01:26:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:06.821 01:26:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:06.821 01:26:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.821 01:26:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.387 01:26:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.388 01:26:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:07.388 01:26:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.388 01:26:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.955 01:26:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.955 01:26:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:07.955 01:26:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:07.955 
01:26:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.955 01:26:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.522 ************************************ 00:06:08.522 END TEST scheduler_create_thread 00:06:08.522 ************************************ 00:06:08.522 01:26:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.522 00:06:08.522 real 0m4.608s 00:06:08.522 user 0m0.030s 00:06:08.522 sys 0m0.006s 00:06:08.522 01:26:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.522 01:26:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.781 01:26:07 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:08.781 01:26:07 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 71234 00:06:08.781 01:26:07 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 71234 ']' 00:06:08.781 01:26:07 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 71234 00:06:08.781 01:26:07 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:08.781 01:26:07 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.781 01:26:07 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71234 00:06:08.781 killing process with pid 71234 00:06:08.781 01:26:07 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:08.781 01:26:07 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:08.781 01:26:07 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71234' 00:06:08.781 01:26:07 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 71234 00:06:08.781 
01:26:07 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 71234 00:06:09.059 [2024-10-09 01:26:07.688420] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:09.355 00:06:09.355 real 0m6.415s 00:06:09.355 user 0m13.684s 00:06:09.355 sys 0m0.497s 00:06:09.355 01:26:07 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.355 01:26:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:09.355 ************************************ 00:06:09.355 END TEST event_scheduler 00:06:09.355 ************************************ 00:06:09.355 01:26:08 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:09.355 01:26:08 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:09.355 01:26:08 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.355 01:26:08 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.355 01:26:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.355 ************************************ 00:06:09.355 START TEST app_repeat 00:06:09.355 ************************************ 00:06:09.355 01:26:08 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:09.355 01:26:08 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.355 01:26:08 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.355 01:26:08 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:09.355 01:26:08 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.355 01:26:08 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:09.355 01:26:08 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:09.355 01:26:08 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:09.355 01:26:08 event.app_repeat -- event/event.sh@19 -- # repeat_pid=71351 00:06:09.355 01:26:08 event.app_repeat -- 
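The teardown above runs `killprocess 71234`, which probes the pid with `kill -0` and inspects the process name before terminating and waiting on it. A simplified sketch of that probe-kill-reap pattern (hypothetical function name, omitting the `ps`/sudo checks the real helper performs):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern from the log.
killprocess_sketch() {
    local pid=$1
    # kill -0 checks for existence without delivering a signal
    kill -0 "$pid" 2>/dev/null || return 0   # already gone; nothing to do
    kill "$pid" 2>/dev/null                  # default SIGTERM
    wait "$pid" 2>/dev/null || true          # reap; ignore the signal exit code
    return 0
}
```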
event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:09.355 01:26:08 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.355 Process app_repeat pid: 71351 00:06:09.355 01:26:08 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 71351' 00:06:09.355 01:26:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:09.355 spdk_app_start Round 0 00:06:09.355 01:26:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:09.355 01:26:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71351 /var/tmp/spdk-nbd.sock 00:06:09.355 01:26:08 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71351 ']' 00:06:09.355 01:26:08 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.355 01:26:08 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:09.355 01:26:08 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:09.356 01:26:08 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.356 01:26:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.356 [2024-10-09 01:26:08.136071] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:06:09.356 [2024-10-09 01:26:08.136211] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71351 ] 00:06:09.615 [2024-10-09 01:26:08.270567] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:09.615 [2024-10-09 01:26:08.296579] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.615 [2024-10-09 01:26:08.367051] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.615 [2024-10-09 01:26:08.367162] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.182 01:26:08 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.182 01:26:08 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:10.183 01:26:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.442 Malloc0 00:06:10.442 01:26:09 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.700 Malloc1 00:06:10.700 01:26:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.700 01:26:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.701 01:26:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.701 01:26:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.701 01:26:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.701 01:26:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 
00:06:10.701 01:26:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.701 01:26:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.701 01:26:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.701 01:26:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.701 01:26:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.701 01:26:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.701 01:26:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:10.701 01:26:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.701 01:26:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.701 01:26:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.960 /dev/nbd0 00:06:10.960 01:26:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.960 01:26:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.960 01:26:09 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:10.960 01:26:09 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:10.960 01:26:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:10.960 01:26:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:10.960 01:26:09 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:10.960 01:26:09 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:10.960 01:26:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:10.960 01:26:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:10.960 01:26:09 
event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.960 1+0 records in 00:06:10.960 1+0 records out 00:06:10.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032588 s, 12.6 MB/s 00:06:10.960 01:26:09 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.960 01:26:09 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:10.960 01:26:09 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.960 01:26:09 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:10.960 01:26:09 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:10.960 01:26:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.960 01:26:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.960 01:26:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:11.319 /dev/nbd1 00:06:11.319 01:26:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:11.319 01:26:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:11.319 01:26:09 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:11.319 01:26:09 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:11.319 01:26:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:11.319 01:26:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:11.319 01:26:09 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:11.319 01:26:09 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:11.319 01:26:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:06:11.319 01:26:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:11.319 01:26:09 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.319 1+0 records in 00:06:11.319 1+0 records out 00:06:11.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372097 s, 11.0 MB/s 00:06:11.319 01:26:09 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.319 01:26:09 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:11.319 01:26:09 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.319 01:26:09 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:11.319 01:26:09 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:11.319 01:26:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.319 01:26:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.319 01:26:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.319 01:26:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.319 01:26:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.319 01:26:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:11.319 { 00:06:11.319 "nbd_device": "/dev/nbd0", 00:06:11.319 "bdev_name": "Malloc0" 00:06:11.319 }, 00:06:11.319 { 00:06:11.319 "nbd_device": "/dev/nbd1", 00:06:11.319 "bdev_name": "Malloc1" 00:06:11.319 } 00:06:11.319 ]' 00:06:11.319 01:26:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:11.319 { 00:06:11.319 "nbd_device": "/dev/nbd0", 00:06:11.319 "bdev_name": "Malloc0" 00:06:11.319 }, 00:06:11.319 { 00:06:11.319 
"nbd_device": "/dev/nbd1", 00:06:11.319 "bdev_name": "Malloc1" 00:06:11.319 } 00:06:11.319 ]' 00:06:11.319 01:26:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.319 01:26:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:11.319 /dev/nbd1' 00:06:11.319 01:26:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:11.319 /dev/nbd1' 00:06:11.319 01:26:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:11.578 256+0 records in 00:06:11.578 256+0 records out 00:06:11.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128172 s, 81.8 MB/s 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 
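The trace above counts attached devices by piping the `nbd_get_disks` JSON through `jq -r '.[] | .nbd_device'` and then `grep -c /dev/nbd`. A self-contained sketch of that counting step using a canned reply and plain grep (so it runs without jq; the JSON shape matches the log):

```shell
#!/usr/bin/env bash
# Count nbd device paths in an nbd_get_disks-style JSON reply.
json='[ { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
        { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" } ]'
# grep -o emits one match per line; grep -c then counts them.
count=$(printf '%s\n' "$json" | grep -o '/dev/nbd[0-9]*' | grep -c /dev/nbd)
echo "$count"   # prints 2
```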
oflag=direct 00:06:11.578 256+0 records in 00:06:11.578 256+0 records out 00:06:11.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175136 s, 59.9 MB/s 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.578 256+0 records in 00:06:11.578 256+0 records out 00:06:11.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219844 s, 47.7 MB/s 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 
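The verify phase above writes 1 MiB of `/dev/urandom` data through each `/dev/nbd*` device with `dd`, then checks it back with `cmp -b -n 1M`. A sketch of that write-then-verify cycle against ordinary temp files instead of nbd devices (same dd/cmp pattern, smaller size):

```shell
#!/usr/bin/env bash
# Sketch of the dd write + cmp verify pattern from the log, using temp files.
verify_copy() {
    local src dst rc
    src=$(mktemp) dst=$(mktemp)
    dd if=/dev/urandom of="$src" bs=4096 count=16 2>/dev/null  # random source
    dd if="$src" of="$dst" bs=4096 count=16 2>/dev/null        # "write to device"
    cmp -b -n $((4096 * 16)) "$src" "$dst"                     # byte-compare
    rc=$?
    rm -f "$src" "$dst"
    return $rc
}
```

The real test adds `oflag=direct`/`iflag=direct` to bypass the page cache when talking to the nbd block devices.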
/dev/nbd1' 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.578 01:26:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.837 01:26:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.837 01:26:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.837 01:26:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.837 01:26:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.837 01:26:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.837 01:26:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.837 01:26:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.837 01:26:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.837 01:26:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.837 01:26:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:11.837 01:26:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:11.837 01:26:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:11.837 01:26:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:11.837 01:26:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.837 01:26:10 event.app_repeat -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:06:11.837 01:26:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:11.837 01:26:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.837 01:26:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.096 01:26:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.096 01:26:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.096 01:26:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.096 01:26:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:12.096 01:26:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:12.096 01:26:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.096 01:26:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:12.096 01:26:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.096 01:26:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:12.096 01:26:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:12.096 01:26:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:12.096 01:26:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:12.096 01:26:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:12.355 01:26:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:12.355 01:26:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:12.355 01:26:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:12.355 01:26:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:12.923 [2024-10-09 01:26:11.536545] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.923 [2024-10-09 
01:26:11.601217] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.923 [2024-10-09 01:26:11.601218] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.923 [2024-10-09 01:26:11.676994] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:12.923 [2024-10-09 01:26:11.677081] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:15.456 spdk_app_start Round 1 00:06:15.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:15.456 01:26:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:15.456 01:26:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:15.456 01:26:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71351 /var/tmp/spdk-nbd.sock 00:06:15.456 01:26:14 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71351 ']' 00:06:15.456 01:26:14 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.456 01:26:14 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.456 01:26:14 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:15.456 01:26:14 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.456 01:26:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.715 01:26:14 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.715 01:26:14 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:15.715 01:26:14 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.974 Malloc0 00:06:15.974 01:26:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.974 Malloc1 00:06:15.974 01:26:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.974 01:26:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.974 01:26:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.974 01:26:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:15.974 01:26:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.974 01:26:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:15.974 01:26:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.974 01:26:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.974 01:26:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.974 01:26:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:15.974 01:26:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.974 01:26:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:15.974 01:26:14 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:15.974 01:26:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:15.974 01:26:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.974 01:26:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:16.233 /dev/nbd0 00:06:16.233 01:26:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:16.233 01:26:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:16.233 01:26:15 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:16.233 01:26:15 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:16.233 01:26:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:16.233 01:26:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:16.233 01:26:15 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:16.233 01:26:15 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:16.233 01:26:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:16.233 01:26:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:16.233 01:26:15 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.233 1+0 records in 00:06:16.233 1+0 records out 00:06:16.233 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337908 s, 12.1 MB/s 00:06:16.233 01:26:15 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.233 01:26:15 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:16.233 01:26:15 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.233 
01:26:15 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:16.233 01:26:15 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:16.233 01:26:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.233 01:26:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.233 01:26:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:16.492 /dev/nbd1 00:06:16.492 01:26:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:16.492 01:26:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:16.492 01:26:15 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:16.492 01:26:15 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:16.492 01:26:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:16.492 01:26:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:16.492 01:26:15 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:16.492 01:26:15 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:16.492 01:26:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:16.492 01:26:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:16.492 01:26:15 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.492 1+0 records in 00:06:16.492 1+0 records out 00:06:16.492 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359407 s, 11.4 MB/s 00:06:16.492 01:26:15 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.492 01:26:15 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:16.492 01:26:15 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.492 01:26:15 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:16.492 01:26:15 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:16.492 01:26:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.492 01:26:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.492 01:26:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.492 01:26:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.492 01:26:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.750 01:26:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:16.750 { 00:06:16.750 "nbd_device": "/dev/nbd0", 00:06:16.750 "bdev_name": "Malloc0" 00:06:16.750 }, 00:06:16.750 { 00:06:16.750 "nbd_device": "/dev/nbd1", 00:06:16.750 "bdev_name": "Malloc1" 00:06:16.750 } 00:06:16.750 ]' 00:06:16.750 01:26:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:16.750 { 00:06:16.750 "nbd_device": "/dev/nbd0", 00:06:16.750 "bdev_name": "Malloc0" 00:06:16.750 }, 00:06:16.750 { 00:06:16.750 "nbd_device": "/dev/nbd1", 00:06:16.750 "bdev_name": "Malloc1" 00:06:16.750 } 00:06:16.750 ]' 00:06:16.750 01:26:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.750 01:26:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:16.750 /dev/nbd1' 00:06:16.751 01:26:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:16.751 /dev/nbd1' 00:06:16.751 01:26:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.751 01:26:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:16.751 01:26:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:16.751 
01:26:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:16.751 01:26:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:16.751 01:26:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:16.751 01:26:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.751 01:26:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.751 01:26:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:16.751 01:26:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.751 01:26:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:16.751 01:26:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:17.010 256+0 records in 00:06:17.010 256+0 records out 00:06:17.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00549835 s, 191 MB/s 00:06:17.010 01:26:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.010 01:26:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:17.010 256+0 records in 00:06:17.010 256+0 records out 00:06:17.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209739 s, 50.0 MB/s 00:06:17.010 01:26:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.010 01:26:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:17.010 256+0 records in 00:06:17.010 256+0 records out 00:06:17.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220835 s, 47.5 MB/s 00:06:17.010 01:26:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:17.010 01:26:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.010 01:26:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.010 01:26:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:17.010 01:26:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.010 01:26:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:17.010 01:26:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:17.010 01:26:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.010 01:26:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:17.010 01:26:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.010 01:26:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:17.010 01:26:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.010 01:26:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:17.010 01:26:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.010 01:26:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.010 01:26:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.010 01:26:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:17.010 01:26:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.010 01:26:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.268 01:26:15 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.268 01:26:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.268 01:26:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.268 01:26:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.268 01:26:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.268 01:26:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.268 01:26:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:17.268 01:26:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.268 01:26:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.268 01:26:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:17.268 01:26:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:17.268 01:26:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:17.268 01:26:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:17.268 01:26:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.268 01:26:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.268 01:26:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:17.268 01:26:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:17.268 01:26:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.533 01:26:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.534 01:26:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.534 01:26:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.534 01:26:16 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.534 01:26:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.534 01:26:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.534 01:26:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:17.534 01:26:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.534 01:26:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.534 01:26:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:17.534 01:26:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.534 01:26:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.534 01:26:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:17.534 01:26:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:17.534 01:26:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:17.534 01:26:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:17.806 01:26:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:18.065 [2024-10-09 01:26:16.946003] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.325 [2024-10-09 01:26:17.012243] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.325 [2024-10-09 01:26:17.012297] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.325 [2024-10-09 01:26:17.089966] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:18.325 [2024-10-09 01:26:17.090051] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:06:20.855 spdk_app_start Round 2 00:06:20.855 01:26:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:20.855 01:26:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:20.855 01:26:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71351 /var/tmp/spdk-nbd.sock 00:06:20.855 01:26:19 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71351 ']' 00:06:20.855 01:26:19 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.855 01:26:19 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.855 01:26:19 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:20.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:20.855 01:26:19 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.855 01:26:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:21.114 01:26:19 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.114 01:26:19 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:21.114 01:26:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.372 Malloc0 00:06:21.373 01:26:20 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.632 Malloc1 00:06:21.632 01:26:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.632 01:26:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.632 01:26:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.632 
01:26:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:21.632 01:26:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.632 01:26:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:21.632 01:26:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.632 01:26:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.632 01:26:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.632 01:26:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:21.632 01:26:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.632 01:26:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:21.632 01:26:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:21.632 01:26:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:21.632 01:26:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.632 01:26:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:21.632 /dev/nbd0 00:06:21.891 01:26:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:21.891 01:26:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:21.891 01:26:20 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:21.891 01:26:20 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:21.891 01:26:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:21.891 01:26:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:21.891 01:26:20 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:21.891 01:26:20 
event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:21.891 01:26:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:21.891 01:26:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:21.891 01:26:20 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.891 1+0 records in 00:06:21.891 1+0 records out 00:06:21.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318739 s, 12.9 MB/s 00:06:21.891 01:26:20 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.891 01:26:20 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:21.891 01:26:20 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.891 01:26:20 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:21.891 01:26:20 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:21.891 01:26:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.891 01:26:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.891 01:26:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:21.891 /dev/nbd1 00:06:21.891 01:26:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:21.891 01:26:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:21.891 01:26:20 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:21.891 01:26:20 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:21.891 01:26:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:21.891 01:26:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:21.892 01:26:20 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:22.151 01:26:20 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:22.151 01:26:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:22.151 01:26:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:22.151 01:26:20 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.151 1+0 records in 00:06:22.151 1+0 records out 00:06:22.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277726 s, 14.7 MB/s 00:06:22.151 01:26:20 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.151 01:26:20 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:22.151 01:26:20 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.151 01:26:20 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:22.151 01:26:20 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:22.151 01:26:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.151 01:26:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.151 01:26:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.151 01:26:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.151 01:26:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.151 01:26:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:22.151 { 00:06:22.151 "nbd_device": "/dev/nbd0", 00:06:22.151 "bdev_name": "Malloc0" 00:06:22.151 }, 00:06:22.151 { 00:06:22.151 "nbd_device": "/dev/nbd1", 00:06:22.151 "bdev_name": 
"Malloc1" 00:06:22.151 } 00:06:22.151 ]' 00:06:22.151 01:26:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:22.151 { 00:06:22.151 "nbd_device": "/dev/nbd0", 00:06:22.151 "bdev_name": "Malloc0" 00:06:22.151 }, 00:06:22.151 { 00:06:22.151 "nbd_device": "/dev/nbd1", 00:06:22.151 "bdev_name": "Malloc1" 00:06:22.151 } 00:06:22.151 ]' 00:06:22.151 01:26:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.409 01:26:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:22.409 /dev/nbd1' 00:06:22.409 01:26:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:22.409 /dev/nbd1' 00:06:22.409 01:26:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.409 01:26:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:22.409 01:26:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:22.409 01:26:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:22.409 01:26:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:22.410 256+0 records in 00:06:22.410 256+0 records out 00:06:22.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00709729 s, 148 MB/s 
00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:22.410 256+0 records in 00:06:22.410 256+0 records out 00:06:22.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196775 s, 53.3 MB/s 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:22.410 256+0 records in 00:06:22.410 256+0 records out 00:06:22.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227117 s, 46.2 MB/s 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.410 01:26:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.669 01:26:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.669 01:26:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.669 01:26:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.669 01:26:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.669 01:26:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.669 01:26:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.669 01:26:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:22.669 01:26:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.669 01:26:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.669 01:26:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:22.927 01:26:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:22.927 01:26:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:06:22.927 01:26:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:22.927 01:26:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.927 01:26:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.927 01:26:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:22.927 01:26:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:22.927 01:26:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.927 01:26:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.927 01:26:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.927 01:26:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.927 01:26:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:22.927 01:26:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:22.927 01:26:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.186 01:26:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:23.186 01:26:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:23.186 01:26:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.186 01:26:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:23.186 01:26:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:23.186 01:26:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:23.186 01:26:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:23.186 01:26:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:23.186 01:26:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:23.186 01:26:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:23.445 01:26:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:23.703 [2024-10-09 01:26:22.377720] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.703 [2024-10-09 01:26:22.438809] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.703 [2024-10-09 01:26:22.438818] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.703 [2024-10-09 01:26:22.513287] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:23.703 [2024-10-09 01:26:22.513352] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:26.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:26.236 01:26:25 event.app_repeat -- event/event.sh@38 -- # waitforlisten 71351 /var/tmp/spdk-nbd.sock 00:06:26.236 01:26:25 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71351 ']' 00:06:26.236 01:26:25 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:26.236 01:26:25 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.236 01:26:25 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
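The nbd teardown traced above stops each disk over RPC and then polls `/proc/partitions` until the device name disappears, giving up after 20 attempts (`(( i = 1 ))` … `(( i <= 20 ))` … `grep -q -w` … `break`). A minimal sketch of that bounded-polling pattern — `wait_gone` is a hypothetical name; the helper in this log is `waitfornbd_exit` from `bdev/nbd_common.sh`:

```shell
# Poll until a device name vanishes from a partitions table, or give up.
# Sketch only: the function name, table argument, and sleep interval are
# assumptions, not SPDK's actual code.
wait_gone() {
    local name=$1 table=$2 i
    for ((i = 1; i <= 20; i++)); do
        # grep -w matches whole words, so "nbd1" will not match "nbd10"
        grep -q -w "$name" "$table" || return 0   # gone: success
        sleep 0.1
    done
    return 1   # still present after 20 tries
}
```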
00:06:26.236 01:26:25 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.236 01:26:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:26.495 01:26:25 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.495 01:26:25 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:26.495 01:26:25 event.app_repeat -- event/event.sh@39 -- # killprocess 71351 00:06:26.495 01:26:25 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 71351 ']' 00:06:26.495 01:26:25 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 71351 00:06:26.495 01:26:25 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:26.495 01:26:25 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:26.495 01:26:25 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71351 00:06:26.495 killing process with pid 71351 00:06:26.495 01:26:25 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:26.495 01:26:25 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:26.495 01:26:25 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71351' 00:06:26.495 01:26:25 event.app_repeat -- common/autotest_common.sh@969 -- # kill 71351 00:06:26.495 01:26:25 event.app_repeat -- common/autotest_common.sh@974 -- # wait 71351 00:06:26.755 spdk_app_start is called in Round 0. 00:06:26.755 Shutdown signal received, stop current app iteration 00:06:26.755 Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 reinitialization... 00:06:26.755 spdk_app_start is called in Round 1. 00:06:26.755 Shutdown signal received, stop current app iteration 00:06:26.755 Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 reinitialization... 00:06:26.755 spdk_app_start is called in Round 2. 
00:06:26.755 Shutdown signal received, stop current app iteration 00:06:26.755 Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 reinitialization... 00:06:26.755 spdk_app_start is called in Round 3. 00:06:26.755 Shutdown signal received, stop current app iteration 00:06:27.014 01:26:25 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:27.015 01:26:25 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:27.015 00:06:27.015 real 0m17.573s 00:06:27.015 user 0m37.913s 00:06:27.015 sys 0m2.983s 00:06:27.015 01:26:25 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.015 01:26:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:27.015 ************************************ 00:06:27.015 END TEST app_repeat 00:06:27.015 ************************************ 00:06:27.015 01:26:25 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:27.015 01:26:25 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:27.015 01:26:25 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.015 01:26:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.015 01:26:25 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.015 ************************************ 00:06:27.015 START TEST cpu_locks 00:06:27.015 ************************************ 00:06:27.015 01:26:25 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:27.015 * Looking for test storage... 
00:06:27.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:27.015 01:26:25 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:27.015 01:26:25 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:27.015 01:26:25 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:27.274 01:26:25 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.274 01:26:25 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:27.274 01:26:25 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.274 01:26:25 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:27.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.274 --rc genhtml_branch_coverage=1 00:06:27.274 --rc genhtml_function_coverage=1 00:06:27.274 --rc genhtml_legend=1 00:06:27.274 --rc geninfo_all_blocks=1 00:06:27.274 --rc geninfo_unexecuted_blocks=1 00:06:27.274 00:06:27.274 ' 00:06:27.274 01:26:25 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:27.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.274 --rc genhtml_branch_coverage=1 00:06:27.274 --rc genhtml_function_coverage=1 00:06:27.274 --rc genhtml_legend=1 00:06:27.274 --rc geninfo_all_blocks=1 00:06:27.274 --rc geninfo_unexecuted_blocks=1 
00:06:27.274 00:06:27.274 ' 00:06:27.274 01:26:25 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:27.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.274 --rc genhtml_branch_coverage=1 00:06:27.274 --rc genhtml_function_coverage=1 00:06:27.274 --rc genhtml_legend=1 00:06:27.274 --rc geninfo_all_blocks=1 00:06:27.274 --rc geninfo_unexecuted_blocks=1 00:06:27.274 00:06:27.274 ' 00:06:27.274 01:26:25 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:27.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.274 --rc genhtml_branch_coverage=1 00:06:27.274 --rc genhtml_function_coverage=1 00:06:27.274 --rc genhtml_legend=1 00:06:27.274 --rc geninfo_all_blocks=1 00:06:27.274 --rc geninfo_unexecuted_blocks=1 00:06:27.274 00:06:27.274 ' 00:06:27.274 01:26:25 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:27.274 01:26:25 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:27.274 01:26:25 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:27.274 01:26:25 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:27.274 01:26:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.274 01:26:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.274 01:26:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.274 ************************************ 00:06:27.274 START TEST default_locks 00:06:27.274 ************************************ 00:06:27.274 01:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:27.274 01:26:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71776 00:06:27.274 01:26:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71776 00:06:27.274 01:26:25 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.274 01:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71776 ']' 00:06:27.274 01:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.274 01:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.274 01:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.274 01:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.274 01:26:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.274 [2024-10-09 01:26:26.035243] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:27.274 [2024-10-09 01:26:26.035376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71776 ] 00:06:27.533 [2024-10-09 01:26:26.166171] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
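The `lt 1.15 2` check traced earlier comes from `scripts/common.sh`: each version string is split on `.-:` into an array (`IFS=.-: read -ra ver1`) and compared component by component. A simplified sketch of that comparison — `version_lt` is a hypothetical name, and treating missing components as zero is an assumption:

```shell
# Succeed when $1 is strictly older than $2 (numeric dot-separated versions).
# Simplified sketch of the cmp_versions logic traced in this log.
version_lt() {
    local -a v1 v2
    local i n
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        # A missing component counts as 0, e.g. "2" compares like "2.0"
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # versions are equal
}
```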
00:06:27.533 [2024-10-09 01:26:26.196447] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.533 [2024-10-09 01:26:26.267882] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.098 01:26:26 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.098 01:26:26 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:28.098 01:26:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71776 00:06:28.098 01:26:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71776 00:06:28.098 01:26:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.664 01:26:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71776 00:06:28.664 01:26:27 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 71776 ']' 00:06:28.664 01:26:27 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 71776 00:06:28.664 01:26:27 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:28.664 01:26:27 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:28.664 01:26:27 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71776 00:06:28.664 01:26:27 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:28.664 01:26:27 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:28.664 killing process with pid 71776 00:06:28.664 01:26:27 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71776' 00:06:28.664 01:26:27 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 71776 00:06:28.664 01:26:27 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 71776 00:06:29.232 01:26:28 
event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71776 00:06:29.232 01:26:28 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:29.232 01:26:28 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71776 00:06:29.232 01:26:28 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:29.232 01:26:28 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.232 01:26:28 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:29.232 01:26:28 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.232 01:26:28 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 71776 00:06:29.232 01:26:28 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71776 ']' 00:06:29.232 01:26:28 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.232 01:26:28 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.232 01:26:28 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
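The `killprocess` teardown traced above first probes liveness with `kill -0`, checks the command name via `ps -o comm=` (refusing to kill a `sudo` wrapper), then sends SIGTERM and reaps the pid; the `NOT waitforlisten` test around it relies on the process really being gone afterwards. A condensed sketch of that check-then-kill pattern (`killproc_sketch` is an illustrative name, not the real helper):

```shell
# Kill a pid only if it is alive and not a sudo wrapper, then reap it.
# Sketch of the killprocess pattern traced in this log.
killproc_sketch() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 1        # not running
    name=$(ps --no-headers -o comm= -p "$pid")    # e.g. "reactor_0"
    [ "$name" = sudo ] && return 1                # never TERM a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap if it is our child
}
```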
00:06:29.232 01:26:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.232 01:26:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.232 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71776) - No such process 00:06:29.232 ERROR: process (pid: 71776) is no longer running 00:06:29.232 01:26:28 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.232 01:26:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:29.232 01:26:28 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:29.232 01:26:28 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:29.232 01:26:28 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:29.232 01:26:28 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:29.232 01:26:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:29.232 01:26:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:29.232 01:26:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:29.232 01:26:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:29.232 00:06:29.232 real 0m2.119s 00:06:29.232 user 0m1.918s 00:06:29.232 sys 0m0.830s 00:06:29.232 01:26:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.232 01:26:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.232 ************************************ 00:06:29.232 END TEST default_locks 00:06:29.232 ************************************ 00:06:29.232 01:26:28 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:29.232 01:26:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:06:29.232 01:26:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.232 01:26:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.491 ************************************ 00:06:29.492 START TEST default_locks_via_rpc 00:06:29.492 ************************************ 00:06:29.492 01:26:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:29.492 01:26:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71837 00:06:29.492 01:26:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.492 01:26:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71837 00:06:29.492 01:26:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71837 ']' 00:06:29.492 01:26:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.492 01:26:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.492 01:26:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.492 01:26:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.492 01:26:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.492 [2024-10-09 01:26:28.221068] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:06:29.492 [2024-10-09 01:26:28.221205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71837 ] 00:06:29.492 [2024-10-09 01:26:28.353233] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:29.492 [2024-10-09 01:26:28.380699] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.750 [2024-10-09 01:26:28.451452] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.316 01:26:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.316 01:26:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:30.316 01:26:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:30.316 01:26:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.316 01:26:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.317 01:26:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.317 01:26:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:30.317 01:26:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:30.317 01:26:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:30.317 01:26:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:30.317 01:26:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:30.317 01:26:29 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.317 01:26:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.317 01:26:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.317 01:26:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71837 00:06:30.317 01:26:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71837 00:06:30.317 01:26:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.575 01:26:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71837 00:06:30.575 01:26:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 71837 ']' 00:06:30.575 01:26:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 71837 00:06:30.575 01:26:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:30.575 01:26:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:30.575 01:26:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71837 00:06:30.575 killing process with pid 71837 00:06:30.575 01:26:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:30.575 01:26:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:30.575 01:26:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71837' 00:06:30.575 01:26:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 71837 00:06:30.575 01:26:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 71837 00:06:31.143 00:06:31.143 real 0m1.876s 00:06:31.143 user 0m1.689s 
00:06:31.143 sys 0m0.703s 00:06:31.143 01:26:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.143 ************************************ 00:06:31.143 END TEST default_locks_via_rpc 00:06:31.143 ************************************ 00:06:31.143 01:26:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.403 01:26:30 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:31.403 01:26:30 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.403 01:26:30 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.403 01:26:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.403 ************************************ 00:06:31.403 START TEST non_locking_app_on_locked_coremask 00:06:31.403 ************************************ 00:06:31.403 01:26:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:31.403 01:26:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71883 00:06:31.403 01:26:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.403 01:26:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71883 /var/tmp/spdk.sock 00:06:31.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
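Each `START TEST` / `END TEST` banner pair in this log is printed by the `run_test` wrapper, which runs a named test between banners and propagates its failure. A simplified sketch of that wrapper (the real `run_test` in `autotest_common.sh` also records timing and validates its arguments, omitted here):

```shell
# Print START/END banners around a named test, propagating failure,
# as in the "START TEST cpu_locks" ... "END TEST cpu_locks" markers above.
run_test_sketch() {
    local name=$1
    shift
    echo "START TEST $name"
    "$@" || return 1      # fail fast; no END banner on failure
    echo "END TEST $name"
}
```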
00:06:31.403 01:26:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71883 ']' 00:06:31.403 01:26:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.403 01:26:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.403 01:26:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.403 01:26:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.403 01:26:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.404 [2024-10-09 01:26:30.169117] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:31.404 [2024-10-09 01:26:30.169253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71883 ] 00:06:31.663 [2024-10-09 01:26:30.300853] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:31.663 [2024-10-09 01:26:30.329905] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.663 [2024-10-09 01:26:30.398752] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.230 01:26:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.230 01:26:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:32.230 01:26:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:32.230 01:26:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71894 00:06:32.230 01:26:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71894 /var/tmp/spdk2.sock 00:06:32.230 01:26:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71894 ']' 00:06:32.230 01:26:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.230 01:26:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.230 01:26:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
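The `locks_exist` checks that recur throughout this log (`lslocks -p <pid> | grep -q spdk_cpu_lock`) verify that the target process holds `flock`-style locks on per-core lock files. A hypothetical sketch of taking such a lock from the holder's side — the file path, fd number, and function name are illustrative, not SPDK's actual implementation:

```shell
# Open a lock file on a dedicated fd and try a non-blocking exclusive flock;
# while the fd stays open, "lslocks -p $$" would list this lock.
acquire_core_lock() {
    local lockfile=$1
    exec 9> "$lockfile"          # fd 9 held open for the lock's lifetime
    if flock -n 9; then
        echo "core lock acquired"
    else
        echo "core already locked"
    fi
}
```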
00:06:32.230 01:26:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:32.230 01:26:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:32.230 [2024-10-09 01:26:31.033609] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization...
00:06:32.230 [2024-10-09 01:26:31.033798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71894 ]
00:06:32.489 [2024-10-09 01:26:31.168540] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:32.489 [2024-10-09 01:26:31.186311] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:32.489 [2024-10-09 01:26:31.186353] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:32.489 [2024-10-09 01:26:31.326974] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:33.423 01:26:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:33.423 01:26:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:33.423 01:26:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71883
00:06:33.423 01:26:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71883
00:06:33.423 01:26:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:33.682 01:26:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71883
00:06:33.682 01:26:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71883 ']'
00:06:33.682 01:26:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71883
00:06:33.682 01:26:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:33.682 01:26:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:33.682 01:26:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71883
killing process with pid 71883
01:26:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:33.682 01:26:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:33.682 01:26:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71883'
00:06:33.682 01:26:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71883
00:06:33.682 01:26:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71883
00:06:35.057 01:26:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71894
00:06:35.057 01:26:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71894 ']'
00:06:35.057 01:26:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71894
00:06:35.057 01:26:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:35.057 01:26:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:35.057 01:26:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71894
killing process with pid 71894
01:26:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:35.057 01:26:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:35.057 01:26:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71894'
00:06:35.057 01:26:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71894
00:06:35.057 01:26:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71894
00:06:35.635 ************************************
00:06:35.635 END TEST non_locking_app_on_locked_coremask
00:06:35.635 ************************************
00:06:35.635
00:06:35.635 real 0m4.388s
00:06:35.635 user 0m4.271s
00:06:35.635 sys 0m1.353s
00:06:35.635 01:26:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:35.635 01:26:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:35.635 01:26:34 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:35.635 01:26:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:35.635 01:26:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:35.635 01:26:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:35.910 ************************************
00:06:35.910 START TEST locking_app_on_unlocked_coremask
00:06:35.910 ************************************
00:06:35.910 01:26:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask
00:06:35.910 01:26:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71974
00:06:35.910 01:26:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:35.910 01:26:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71974 /var/tmp/spdk.sock
00:06:35.910 01:26:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71974 ']'
00:06:35.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
01:26:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:35.910 01:26:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:35.910 01:26:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:35.910 01:26:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:35.910 01:26:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:35.910 [2024-10-09 01:26:34.626954] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization...
00:06:35.910 [2024-10-09 01:26:34.627100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71974 ]
[2024-10-09 01:26:34.763201] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:35.910 [2024-10-09 01:26:34.790150] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:35.910 [2024-10-09 01:26:34.790235] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:36.169 [2024-10-09 01:26:34.860151] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:36.735 01:26:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:36.735 01:26:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:36.735 01:26:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:36.735 01:26:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71990
00:06:36.735 01:26:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71990 /var/tmp/spdk2.sock
00:06:36.735 01:26:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71990 ']'
00:06:36.735 01:26:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:36.735 01:26:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:36.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
01:26:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:36.735 01:26:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:36.735 01:26:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:36.993 [2024-10-09 01:26:35.493844] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization...
00:06:36.993 [2024-10-09 01:26:35.494078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71990 ]
00:06:36.993 [2024-10-09 01:26:35.627668] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:36.993 [2024-10-09 01:26:35.645690] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:36.993 [2024-10-09 01:26:35.788881] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:37.560 01:26:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:37.560 01:26:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:37.560 01:26:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71990
00:06:37.560 01:26:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71990
00:06:37.560 01:26:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:38.494 01:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71974
00:06:38.494 01:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71974 ']'
00:06:38.494 01:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71974
00:06:38.494 01:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:38.494 01:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:38.494 01:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71974
00:06:38.752 01:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:38.752 01:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 71974
01:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71974'
00:06:38.752 01:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71974
00:06:38.752 01:26:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71974
00:06:40.127 01:26:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71990
00:06:40.127 01:26:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71990 ']'
00:06:40.127 01:26:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71990
00:06:40.127 01:26:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:40.127 01:26:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:40.127 01:26:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71990
killing process with pid 71990
01:26:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:40.127 01:26:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:40.127 01:26:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71990'
00:06:40.127 01:26:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71990
00:06:40.127 01:26:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71990
00:06:40.694
00:06:40.694 real 0m4.870s
00:06:40.694 user 0m4.745s
00:06:40.694 sys 0m1.573s
00:06:40.694 01:26:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:40.694 ************************************
00:06:40.694 END TEST locking_app_on_unlocked_coremask
00:06:40.694 ************************************
00:06:40.694 01:26:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:40.694 01:26:39 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:40.694 01:26:39 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:40.694 01:26:39 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:40.694 01:26:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:40.694 ************************************
00:06:40.694 START TEST locking_app_on_locked_coremask
00:06:40.694 ************************************
00:06:40.694 01:26:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask
00:06:40.694 01:26:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72069
00:06:40.694 01:26:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:40.694 01:26:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72069 /var/tmp/spdk.sock
00:06:40.694 01:26:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72069 ']'
00:06:40.694 01:26:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
01:26:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:40.694 01:26:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:40.694 01:26:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:40.694 01:26:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:40.953 [2024-10-09 01:26:39.565358] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization...
00:06:40.953 [2024-10-09 01:26:39.565497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72069 ]
00:06:40.953 [2024-10-09 01:26:39.702289] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:40.953 [2024-10-09 01:26:39.732510] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:40.953 [2024-10-09 01:26:39.817063] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:41.520 01:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:41.520 01:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:41.520 01:26:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72075
00:06:41.520 01:26:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72075 /var/tmp/spdk2.sock
00:06:41.520 01:26:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:41.520 01:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:06:41.520 01:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72075 /var/tmp/spdk2.sock
00:06:41.520 01:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:41.520 01:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:41.520 01:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:41.520 01:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:41.520 01:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 72075 /var/tmp/spdk2.sock
00:06:41.520 01:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72075 ']'
01:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:41.520 01:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:41.520 01:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:41.520 01:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:41.520 01:26:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:41.778 [2024-10-09 01:26:40.471872] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization...
00:06:41.778 [2024-10-09 01:26:40.472119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72075 ]
00:06:41.778 [2024-10-09 01:26:40.607087] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:41.778 [2024-10-09 01:26:40.624712] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72069 has claimed it.
00:06:41.778 [2024-10-09 01:26:40.624763] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:42.344 ERROR: process (pid: 72075) is no longer running
00:06:42.344 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (72075) - No such process
00:06:42.344 01:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:42.345 01:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1
00:06:42.345 01:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:06:42.345 01:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:42.345 01:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:42.345 01:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:42.345 01:26:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72069
00:06:42.345 01:26:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72069
00:06:42.345 01:26:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:42.912 01:26:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72069
00:06:42.912 01:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72069 ']'
00:06:42.912 01:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 72069
00:06:42.912 01:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:42.912 01:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:42.912 01:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72069
00:06:42.912 01:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:42.912 01:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 72069
01:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72069'
01:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 72069
01:26:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 72069
00:06:43.483 ************************************
00:06:43.483 END TEST locking_app_on_locked_coremask
00:06:43.483 ************************************
00:06:43.483
00:06:43.483 real 0m2.819s
00:06:43.483 user 0m2.812s
00:06:43.483 sys 0m0.986s
00:06:43.483 01:26:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:43.483 01:26:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:43.483 01:26:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:43.483 01:26:42 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:43.483 01:26:42 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:43.483 01:26:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:43.483 ************************************
00:06:43.483 START TEST locking_overlapped_coremask
00:06:43.483 ************************************
00:06:43.483 01:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask
00:06:43.483 01:26:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72128
00:06:43.483 01:26:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:06:43.483 01:26:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72128 /var/tmp/spdk.sock
00:06:43.483 01:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 72128 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
01:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:43.483 01:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:43.483 01:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:43.483 01:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:43.483 01:26:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:43.742 [2024-10-09 01:26:42.459963] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization...
00:06:43.742 [2024-10-09 01:26:42.460115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72128 ]
00:06:43.742 [2024-10-09 01:26:42.598321] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:43.742 [2024-10-09 01:26:42.627479] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:44.000 [2024-10-09 01:26:42.711607] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:44.001 [2024-10-09 01:26:42.711629] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:06:44.001 [2024-10-09 01:26:42.711721] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:06:44.567 01:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:44.567 01:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:44.567 01:26:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72146
00:06:44.567 01:26:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72146 /var/tmp/spdk2.sock
00:06:44.567 01:26:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:44.567 01:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
00:06:44.567 01:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72146 /var/tmp/spdk2.sock
00:06:44.567 01:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:44.567 01:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:44.567 01:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:44.567 01:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:44.567 01:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 72146 /var/tmp/spdk2.sock
00:06:44.567 01:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 72146 ']'
00:06:44.567 01:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:44.567 01:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:44.567 01:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:44.567 01:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:44.567 01:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:44.824 [2024-10-09 01:26:43.365489] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization...
00:06:44.824 [2024-10-09 01:26:43.365756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72146 ]
00:06:44.824 [2024-10-09 01:26:43.503967] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:44.824 [2024-10-09 01:26:43.520967] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72128 has claimed it.
00:06:44.824 [2024-10-09 01:26:43.521026] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:45.087 ERROR: process (pid: 72146) is no longer running
00:06:45.087 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (72146) - No such process
00:06:45.087 01:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:45.087 01:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1
00:06:45.087 01:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1
00:06:45.087 01:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:45.087 01:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:45.087 01:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:45.087 01:26:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:06:45.087 01:26:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:45.087 01:26:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:45.087 01:26:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:45.087 01:26:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72128
00:06:45.087 01:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 72128 ']'
00:06:45.087 01:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 72128
00:06:45.087 01:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname
00:06:45.346 01:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:45.346 01:26:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72128
00:06:45.346 01:26:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:45.346 01:26:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:45.346 01:26:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72128'
killing process with pid 72128
00:06:45.346 01:26:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 72128
00:06:45.346 01:26:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 72128
00:06:45.913
00:06:45.913 real 0m2.333s
00:06:45.913 user 0m5.871s
00:06:45.913 sys 0m0.726s
00:06:45.913 01:26:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:45.913 01:26:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:45.913 ************************************
00:06:45.913 END TEST locking_overlapped_coremask
00:06:45.913 ************************************
00:06:45.913 01:26:44 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:06:45.913 01:26:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:45.913 01:26:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:45.913 01:26:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:45.913 ************************************
00:06:45.913 START TEST locking_overlapped_coremask_via_rpc
00:06:45.913 ************************************
00:06:45.913 01:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc
00:06:45.913 01:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72198
00:06:45.913 01:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:06:45.913 01:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72198 /var/tmp/spdk.sock
00:06:45.913 01:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72198 ']'
00:06:45.913 01:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:45.913 01:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:45.913 01:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:45.913 01:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:45.913 01:26:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:46.171 [2024-10-09 01:26:44.857977] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization...
00:06:46.171 [2024-10-09 01:26:44.858202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72198 ] 00:06:46.171 [2024-10-09 01:26:44.991295] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:46.171 [2024-10-09 01:26:45.020838] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:46.171 [2024-10-09 01:26:45.020911] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:46.429 [2024-10-09 01:26:45.103442] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.429 [2024-10-09 01:26:45.103553] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.429 [2024-10-09 01:26:45.103685] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.996 01:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.996 01:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:46.996 01:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:46.996 01:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72212 00:06:46.996 01:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72212 /var/tmp/spdk2.sock 00:06:46.996 01:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72212 ']' 00:06:46.996 01:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:06:46.996 01:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.996 01:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.996 01:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.996 01:26:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.996 [2024-10-09 01:26:45.731196] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:46.996 [2024-10-09 01:26:45.731423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72212 ] 00:06:46.996 [2024-10-09 01:26:45.864040] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:46.996 [2024-10-09 01:26:45.881155] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:46.996 [2024-10-09 01:26:45.881201] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.253 [2024-10-09 01:26:45.977157] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:47.253 [2024-10-09 01:26:45.980658] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.253 [2024-10-09 01:26:45.980769] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:06:47.819 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.819 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:47.819 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:47.819 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.819 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.819 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.819 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:47.819 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:47.819 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:47.820 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:47.820 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.820 01:26:46 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:47.820 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.820 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:47.820 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.820 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.820 [2024-10-09 01:26:46.593705] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72198 has claimed it. 00:06:47.820 request: 00:06:47.820 { 00:06:47.820 "method": "framework_enable_cpumask_locks", 00:06:47.820 "req_id": 1 00:06:47.820 } 00:06:47.820 Got JSON-RPC error response 00:06:47.820 response: 00:06:47.820 { 00:06:47.820 "code": -32603, 00:06:47.820 "message": "Failed to claim CPU core: 2" 00:06:47.820 } 00:06:47.820 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:47.820 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:47.820 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:47.820 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:47.820 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:47.820 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72198 /var/tmp/spdk.sock 00:06:47.820 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 72198 ']' 00:06:47.820 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.820 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.820 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.820 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.820 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.078 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.078 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:48.078 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72212 /var/tmp/spdk2.sock 00:06:48.078 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72212 ']' 00:06:48.078 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.078 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.078 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:48.078 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.078 01:26:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.337 01:26:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.337 01:26:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:48.337 01:26:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:48.337 01:26:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:48.337 01:26:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:48.337 01:26:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:48.337 00:06:48.337 real 0m2.280s 00:06:48.337 user 0m1.032s 00:06:48.337 sys 0m0.168s 00:06:48.337 ************************************ 00:06:48.337 END TEST locking_overlapped_coremask_via_rpc 00:06:48.337 ************************************ 00:06:48.337 01:26:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.337 01:26:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.337 01:26:47 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:48.337 01:26:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72198 ]] 00:06:48.337 01:26:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 72198 00:06:48.337 01:26:47 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72198 ']' 00:06:48.337 01:26:47 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72198 00:06:48.337 01:26:47 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:48.337 01:26:47 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.337 01:26:47 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72198 00:06:48.337 01:26:47 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.337 01:26:47 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.337 01:26:47 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72198' 00:06:48.337 killing process with pid 72198 00:06:48.337 01:26:47 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 72198 00:06:48.337 01:26:47 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 72198 00:06:49.273 01:26:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72212 ]] 00:06:49.273 01:26:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72212 00:06:49.273 01:26:47 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72212 ']' 00:06:49.273 01:26:47 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72212 00:06:49.273 01:26:47 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:49.273 01:26:47 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.273 01:26:47 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72212 00:06:49.273 01:26:47 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:49.273 01:26:47 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:49.273 killing process with pid 72212 00:06:49.273 01:26:47 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 72212' 00:06:49.273 01:26:47 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 72212 00:06:49.273 01:26:47 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 72212 00:06:49.532 01:26:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:49.532 01:26:48 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:49.532 01:26:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72198 ]] 00:06:49.532 01:26:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72198 00:06:49.532 Process with pid 72198 is not found 00:06:49.532 Process with pid 72212 is not found 00:06:49.532 01:26:48 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72198 ']' 00:06:49.532 01:26:48 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72198 00:06:49.532 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72198) - No such process 00:06:49.532 01:26:48 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 72198 is not found' 00:06:49.532 01:26:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72212 ]] 00:06:49.532 01:26:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72212 00:06:49.532 01:26:48 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72212 ']' 00:06:49.533 01:26:48 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72212 00:06:49.533 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72212) - No such process 00:06:49.533 01:26:48 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 72212 is not found' 00:06:49.533 01:26:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:49.533 00:06:49.533 real 0m22.561s 00:06:49.533 user 0m34.827s 00:06:49.533 sys 0m7.623s 00:06:49.533 01:26:48 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.533 01:26:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.533 
************************************ 00:06:49.533 END TEST cpu_locks 00:06:49.533 ************************************ 00:06:49.533 00:06:49.533 real 0m51.436s 00:06:49.533 user 1m33.197s 00:06:49.533 sys 0m11.901s 00:06:49.533 01:26:48 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.533 01:26:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:49.533 ************************************ 00:06:49.533 END TEST event 00:06:49.533 ************************************ 00:06:49.533 01:26:48 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:49.533 01:26:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.533 01:26:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.533 01:26:48 -- common/autotest_common.sh@10 -- # set +x 00:06:49.533 ************************************ 00:06:49.533 START TEST thread 00:06:49.533 ************************************ 00:06:49.533 01:26:48 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:49.792 * Looking for test storage... 
00:06:49.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:49.792 01:26:48 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:49.792 01:26:48 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:49.792 01:26:48 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:49.792 01:26:48 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:49.792 01:26:48 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.792 01:26:48 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.792 01:26:48 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.792 01:26:48 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.792 01:26:48 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.792 01:26:48 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.792 01:26:48 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.792 01:26:48 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.792 01:26:48 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.792 01:26:48 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.792 01:26:48 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.792 01:26:48 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:49.792 01:26:48 thread -- scripts/common.sh@345 -- # : 1 00:06:49.792 01:26:48 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.792 01:26:48 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:49.792 01:26:48 thread -- scripts/common.sh@365 -- # decimal 1 00:06:49.792 01:26:48 thread -- scripts/common.sh@353 -- # local d=1 00:06:49.792 01:26:48 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.792 01:26:48 thread -- scripts/common.sh@355 -- # echo 1 00:06:49.792 01:26:48 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.792 01:26:48 thread -- scripts/common.sh@366 -- # decimal 2 00:06:49.792 01:26:48 thread -- scripts/common.sh@353 -- # local d=2 00:06:49.792 01:26:48 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.792 01:26:48 thread -- scripts/common.sh@355 -- # echo 2 00:06:49.792 01:26:48 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.792 01:26:48 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.792 01:26:48 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.792 01:26:48 thread -- scripts/common.sh@368 -- # return 0 00:06:49.792 01:26:48 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.792 01:26:48 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:49.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.792 --rc genhtml_branch_coverage=1 00:06:49.792 --rc genhtml_function_coverage=1 00:06:49.792 --rc genhtml_legend=1 00:06:49.792 --rc geninfo_all_blocks=1 00:06:49.792 --rc geninfo_unexecuted_blocks=1 00:06:49.792 00:06:49.792 ' 00:06:49.792 01:26:48 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:49.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.792 --rc genhtml_branch_coverage=1 00:06:49.792 --rc genhtml_function_coverage=1 00:06:49.792 --rc genhtml_legend=1 00:06:49.792 --rc geninfo_all_blocks=1 00:06:49.792 --rc geninfo_unexecuted_blocks=1 00:06:49.792 00:06:49.792 ' 00:06:49.792 01:26:48 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:49.792 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.792 --rc genhtml_branch_coverage=1 00:06:49.792 --rc genhtml_function_coverage=1 00:06:49.792 --rc genhtml_legend=1 00:06:49.792 --rc geninfo_all_blocks=1 00:06:49.792 --rc geninfo_unexecuted_blocks=1 00:06:49.792 00:06:49.792 ' 00:06:49.792 01:26:48 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:49.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.792 --rc genhtml_branch_coverage=1 00:06:49.792 --rc genhtml_function_coverage=1 00:06:49.792 --rc genhtml_legend=1 00:06:49.792 --rc geninfo_all_blocks=1 00:06:49.792 --rc geninfo_unexecuted_blocks=1 00:06:49.792 00:06:49.792 ' 00:06:49.792 01:26:48 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:49.792 01:26:48 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:49.792 01:26:48 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.792 01:26:48 thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.792 ************************************ 00:06:49.792 START TEST thread_poller_perf 00:06:49.792 ************************************ 00:06:49.792 01:26:48 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:49.792 [2024-10-09 01:26:48.678009] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:49.792 [2024-10-09 01:26:48.678188] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72346 ] 00:06:50.050 [2024-10-09 01:26:48.814727] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:06:50.050 [2024-10-09 01:26:48.841832] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.050 [2024-10-09 01:26:48.916448] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.050 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:51.438 [2024-10-09T01:26:50.331Z] ====================================== 00:06:51.438 [2024-10-09T01:26:50.331Z] busy:2302780278 (cyc) 00:06:51.438 [2024-10-09T01:26:50.331Z] total_run_count: 418000 00:06:51.438 [2024-10-09T01:26:50.331Z] tsc_hz: 2294600000 (cyc) 00:06:51.438 [2024-10-09T01:26:50.331Z] ====================================== 00:06:51.438 [2024-10-09T01:26:50.331Z] poller_cost: 5509 (cyc), 2400 (nsec) 00:06:51.438 00:06:51.438 real 0m1.431s 00:06:51.438 user 0m1.196s 00:06:51.438 sys 0m0.129s 00:06:51.438 01:26:50 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.438 01:26:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:51.438 ************************************ 00:06:51.438 END TEST thread_poller_perf 00:06:51.438 ************************************ 00:06:51.438 01:26:50 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:51.438 01:26:50 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:51.438 01:26:50 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.438 01:26:50 thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.438 ************************************ 00:06:51.438 START TEST thread_poller_perf 00:06:51.438 ************************************ 00:06:51.438 01:26:50 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:51.438 [2024-10-09 01:26:50.175345] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 
initialization... 00:06:51.438 [2024-10-09 01:26:50.175525] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72388 ] 00:06:51.438 [2024-10-09 01:26:50.310860] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:51.697 [2024-10-09 01:26:50.339567] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.697 [2024-10-09 01:26:50.412773] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.697 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:53.074 [2024-10-09T01:26:51.967Z] ====================================== 00:06:53.074 [2024-10-09T01:26:51.967Z] busy:2298339356 (cyc) 00:06:53.074 [2024-10-09T01:26:51.967Z] total_run_count: 5550000 00:06:53.074 [2024-10-09T01:26:51.967Z] tsc_hz: 2294600000 (cyc) 00:06:53.074 [2024-10-09T01:26:51.967Z] ====================================== 00:06:53.074 [2024-10-09T01:26:51.967Z] poller_cost: 414 (cyc), 180 (nsec) 00:06:53.074 00:06:53.074 real 0m1.420s 00:06:53.074 user 0m1.184s 00:06:53.074 sys 0m0.128s 00:06:53.074 ************************************ 00:06:53.074 END TEST thread_poller_perf 00:06:53.074 ************************************ 00:06:53.074 01:26:51 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.074 01:26:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:53.074 01:26:51 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:53.074 00:06:53.074 real 0m3.214s 00:06:53.074 user 0m2.536s 00:06:53.074 sys 0m0.480s 00:06:53.074 ************************************ 00:06:53.074 END TEST thread 00:06:53.074 ************************************ 00:06:53.074 01:26:51 thread -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.074 01:26:51 thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.074 01:26:51 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:53.074 01:26:51 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:53.074 01:26:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.074 01:26:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.074 01:26:51 -- common/autotest_common.sh@10 -- # set +x 00:06:53.074 ************************************ 00:06:53.074 START TEST app_cmdline 00:06:53.074 ************************************ 00:06:53.074 01:26:51 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:53.074 * Looking for test storage... 00:06:53.074 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:53.074 01:26:51 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:53.074 01:26:51 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:53.074 01:26:51 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:53.074 01:26:51 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:53.074 01:26:51 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.074 01:26:51 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.074 01:26:51 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.074 01:26:51 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.074 01:26:51 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.074 01:26:51 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.074 01:26:51 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.074 01:26:51 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.074 01:26:51 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.074 01:26:51 app_cmdline -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:53.074 01:26:51 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.074 01:26:51 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:53.074 01:26:51 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:53.074 01:26:51 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.074 01:26:51 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:53.074 01:26:51 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:53.074 01:26:51 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:53.074 01:26:51 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.074 01:26:51 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:53.074 01:26:51 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.075 01:26:51 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:53.075 01:26:51 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:53.075 01:26:51 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.075 01:26:51 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:53.075 01:26:51 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.075 01:26:51 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.075 01:26:51 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.075 01:26:51 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:53.075 01:26:51 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.075 01:26:51 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:53.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.075 --rc genhtml_branch_coverage=1 00:06:53.075 --rc genhtml_function_coverage=1 00:06:53.075 --rc genhtml_legend=1 00:06:53.075 --rc geninfo_all_blocks=1 00:06:53.075 --rc geninfo_unexecuted_blocks=1 00:06:53.075 
00:06:53.075 ' 00:06:53.075 01:26:51 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:53.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.075 --rc genhtml_branch_coverage=1 00:06:53.075 --rc genhtml_function_coverage=1 00:06:53.075 --rc genhtml_legend=1 00:06:53.075 --rc geninfo_all_blocks=1 00:06:53.075 --rc geninfo_unexecuted_blocks=1 00:06:53.075 00:06:53.075 ' 00:06:53.075 01:26:51 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:53.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.075 --rc genhtml_branch_coverage=1 00:06:53.075 --rc genhtml_function_coverage=1 00:06:53.075 --rc genhtml_legend=1 00:06:53.075 --rc geninfo_all_blocks=1 00:06:53.075 --rc geninfo_unexecuted_blocks=1 00:06:53.075 00:06:53.075 ' 00:06:53.075 01:26:51 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:53.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.075 --rc genhtml_branch_coverage=1 00:06:53.075 --rc genhtml_function_coverage=1 00:06:53.075 --rc genhtml_legend=1 00:06:53.075 --rc geninfo_all_blocks=1 00:06:53.075 --rc geninfo_unexecuted_blocks=1 00:06:53.075 00:06:53.075 ' 00:06:53.075 01:26:51 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:53.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:53.075 01:26:51 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=72475 00:06:53.075 01:26:51 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:53.075 01:26:51 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 72475 00:06:53.075 01:26:51 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 72475 ']' 00:06:53.075 01:26:51 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.075 01:26:51 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.075 01:26:51 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.075 01:26:51 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.075 01:26:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:53.334 [2024-10-09 01:26:51.987906] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:53.334 [2024-10-09 01:26:51.988119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72475 ] 00:06:53.334 [2024-10-09 01:26:52.119007] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:53.334 [2024-10-09 01:26:52.147659] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.334 [2024-10-09 01:26:52.219149] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.271 01:26:52 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.271 01:26:52 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:54.271 01:26:52 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:54.271 { 00:06:54.271 "version": "SPDK v25.01-pre git sha1 92108e0a2", 00:06:54.271 "fields": { 00:06:54.271 "major": 25, 00:06:54.271 "minor": 1, 00:06:54.271 "patch": 0, 00:06:54.271 "suffix": "-pre", 00:06:54.271 "commit": "92108e0a2" 00:06:54.271 } 00:06:54.271 } 00:06:54.271 01:26:52 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:54.271 01:26:52 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:54.271 01:26:52 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:54.271 01:26:52 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:54.271 01:26:52 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:54.271 01:26:52 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:54.271 01:26:52 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.271 01:26:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:54.271 01:26:52 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:54.271 01:26:52 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.271 01:26:53 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:54.271 01:26:53 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:54.271 01:26:53 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
env_dpdk_get_mem_stats 00:06:54.271 01:26:53 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:54.271 01:26:53 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:54.271 01:26:53 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:54.271 01:26:53 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.271 01:26:53 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:54.271 01:26:53 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.271 01:26:53 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:54.271 01:26:53 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.271 01:26:53 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:54.271 01:26:53 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:54.271 01:26:53 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:54.530 request: 00:06:54.530 { 00:06:54.530 "method": "env_dpdk_get_mem_stats", 00:06:54.530 "req_id": 1 00:06:54.530 } 00:06:54.530 Got JSON-RPC error response 00:06:54.530 response: 00:06:54.530 { 00:06:54.530 "code": -32601, 00:06:54.530 "message": "Method not found" 00:06:54.530 } 00:06:54.530 01:26:53 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:54.530 01:26:53 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:54.530 01:26:53 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:54.530 01:26:53 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:54.530 01:26:53 app_cmdline -- app/cmdline.sh@1 
-- # killprocess 72475 00:06:54.530 01:26:53 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 72475 ']' 00:06:54.530 01:26:53 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 72475 00:06:54.530 01:26:53 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:54.530 01:26:53 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.530 01:26:53 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72475 00:06:54.530 killing process with pid 72475 00:06:54.530 01:26:53 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.530 01:26:53 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.530 01:26:53 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72475' 00:06:54.530 01:26:53 app_cmdline -- common/autotest_common.sh@969 -- # kill 72475 00:06:54.530 01:26:53 app_cmdline -- common/autotest_common.sh@974 -- # wait 72475 00:06:55.099 00:06:55.099 real 0m2.238s 00:06:55.099 user 0m2.283s 00:06:55.099 sys 0m0.698s 00:06:55.099 ************************************ 00:06:55.099 END TEST app_cmdline 00:06:55.099 ************************************ 00:06:55.099 01:26:53 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.099 01:26:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:55.099 01:26:53 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:55.099 01:26:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.099 01:26:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.099 01:26:53 -- common/autotest_common.sh@10 -- # set +x 00:06:55.099 ************************************ 00:06:55.099 START TEST version 00:06:55.099 ************************************ 00:06:55.099 01:26:53 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:55.359 * Looking for test 
storage... 00:06:55.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:55.359 01:26:54 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:55.359 01:26:54 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:55.359 01:26:54 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:55.359 01:26:54 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:55.359 01:26:54 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.359 01:26:54 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.359 01:26:54 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.359 01:26:54 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.359 01:26:54 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.359 01:26:54 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.359 01:26:54 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.359 01:26:54 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.359 01:26:54 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.359 01:26:54 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.359 01:26:54 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.359 01:26:54 version -- scripts/common.sh@344 -- # case "$op" in 00:06:55.359 01:26:54 version -- scripts/common.sh@345 -- # : 1 00:06:55.359 01:26:54 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.359 01:26:54 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.359 01:26:54 version -- scripts/common.sh@365 -- # decimal 1 00:06:55.359 01:26:54 version -- scripts/common.sh@353 -- # local d=1 00:06:55.359 01:26:54 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.359 01:26:54 version -- scripts/common.sh@355 -- # echo 1 00:06:55.359 01:26:54 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.359 01:26:54 version -- scripts/common.sh@366 -- # decimal 2 00:06:55.359 01:26:54 version -- scripts/common.sh@353 -- # local d=2 00:06:55.359 01:26:54 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.359 01:26:54 version -- scripts/common.sh@355 -- # echo 2 00:06:55.359 01:26:54 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.359 01:26:54 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.359 01:26:54 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.359 01:26:54 version -- scripts/common.sh@368 -- # return 0 00:06:55.359 01:26:54 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.359 01:26:54 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:55.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.359 --rc genhtml_branch_coverage=1 00:06:55.359 --rc genhtml_function_coverage=1 00:06:55.359 --rc genhtml_legend=1 00:06:55.359 --rc geninfo_all_blocks=1 00:06:55.359 --rc geninfo_unexecuted_blocks=1 00:06:55.359 00:06:55.359 ' 00:06:55.359 01:26:54 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:55.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.359 --rc genhtml_branch_coverage=1 00:06:55.359 --rc genhtml_function_coverage=1 00:06:55.359 --rc genhtml_legend=1 00:06:55.359 --rc geninfo_all_blocks=1 00:06:55.359 --rc geninfo_unexecuted_blocks=1 00:06:55.359 00:06:55.359 ' 00:06:55.359 01:26:54 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:55.359 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.359 --rc genhtml_branch_coverage=1 00:06:55.359 --rc genhtml_function_coverage=1 00:06:55.359 --rc genhtml_legend=1 00:06:55.359 --rc geninfo_all_blocks=1 00:06:55.359 --rc geninfo_unexecuted_blocks=1 00:06:55.359 00:06:55.359 ' 00:06:55.359 01:26:54 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:55.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.359 --rc genhtml_branch_coverage=1 00:06:55.359 --rc genhtml_function_coverage=1 00:06:55.359 --rc genhtml_legend=1 00:06:55.359 --rc geninfo_all_blocks=1 00:06:55.359 --rc geninfo_unexecuted_blocks=1 00:06:55.359 00:06:55.359 ' 00:06:55.359 01:26:54 version -- app/version.sh@17 -- # get_header_version major 00:06:55.359 01:26:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:55.359 01:26:54 version -- app/version.sh@14 -- # cut -f2 00:06:55.359 01:26:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:55.359 01:26:54 version -- app/version.sh@17 -- # major=25 00:06:55.359 01:26:54 version -- app/version.sh@18 -- # get_header_version minor 00:06:55.359 01:26:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:55.359 01:26:54 version -- app/version.sh@14 -- # cut -f2 00:06:55.359 01:26:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:55.359 01:26:54 version -- app/version.sh@18 -- # minor=1 00:06:55.359 01:26:54 version -- app/version.sh@19 -- # get_header_version patch 00:06:55.359 01:26:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:55.359 01:26:54 version -- app/version.sh@14 -- # cut -f2 00:06:55.359 01:26:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:55.359 01:26:54 version -- app/version.sh@19 -- # patch=0 00:06:55.359 
01:26:54 version -- app/version.sh@20 -- # get_header_version suffix 00:06:55.359 01:26:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:55.359 01:26:54 version -- app/version.sh@14 -- # cut -f2 00:06:55.359 01:26:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:55.619 01:26:54 version -- app/version.sh@20 -- # suffix=-pre 00:06:55.619 01:26:54 version -- app/version.sh@22 -- # version=25.1 00:06:55.619 01:26:54 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:55.619 01:26:54 version -- app/version.sh@28 -- # version=25.1rc0 00:06:55.619 01:26:54 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:55.619 01:26:54 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:55.619 01:26:54 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:55.619 01:26:54 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:55.619 ************************************ 00:06:55.619 END TEST version 00:06:55.619 ************************************ 00:06:55.619 00:06:55.619 real 0m0.322s 00:06:55.619 user 0m0.178s 00:06:55.619 sys 0m0.200s 00:06:55.619 01:26:54 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.619 01:26:54 version -- common/autotest_common.sh@10 -- # set +x 00:06:55.619 01:26:54 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:55.620 01:26:54 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:55.620 01:26:54 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:55.620 01:26:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.620 01:26:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.620 01:26:54 -- 
common/autotest_common.sh@10 -- # set +x 00:06:55.620 ************************************ 00:06:55.620 START TEST bdev_raid 00:06:55.620 ************************************ 00:06:55.620 01:26:54 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:55.620 * Looking for test storage... 00:06:55.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:55.620 01:26:54 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:55.620 01:26:54 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:06:55.620 01:26:54 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:55.880 01:26:54 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.880 01:26:54 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:55.880 01:26:54 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.880 01:26:54 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:55.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.880 --rc genhtml_branch_coverage=1 00:06:55.880 --rc genhtml_function_coverage=1 00:06:55.880 --rc genhtml_legend=1 00:06:55.880 --rc geninfo_all_blocks=1 00:06:55.880 --rc geninfo_unexecuted_blocks=1 00:06:55.880 00:06:55.880 ' 00:06:55.880 01:26:54 bdev_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:55.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.880 --rc genhtml_branch_coverage=1 00:06:55.880 --rc genhtml_function_coverage=1 00:06:55.880 --rc genhtml_legend=1 00:06:55.880 --rc geninfo_all_blocks=1 00:06:55.880 --rc geninfo_unexecuted_blocks=1 00:06:55.880 00:06:55.880 ' 00:06:55.880 01:26:54 bdev_raid -- common/autotest_common.sh@1695 -- 
# export 'LCOV=lcov 00:06:55.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.880 --rc genhtml_branch_coverage=1 00:06:55.880 --rc genhtml_function_coverage=1 00:06:55.880 --rc genhtml_legend=1 00:06:55.880 --rc geninfo_all_blocks=1 00:06:55.880 --rc geninfo_unexecuted_blocks=1 00:06:55.880 00:06:55.880 ' 00:06:55.880 01:26:54 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:55.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.880 --rc genhtml_branch_coverage=1 00:06:55.880 --rc genhtml_function_coverage=1 00:06:55.880 --rc genhtml_legend=1 00:06:55.880 --rc geninfo_all_blocks=1 00:06:55.880 --rc geninfo_unexecuted_blocks=1 00:06:55.880 00:06:55.880 ' 00:06:55.880 01:26:54 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:55.880 01:26:54 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:55.880 01:26:54 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:55.880 01:26:54 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:55.880 01:26:54 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:55.880 01:26:54 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:55.880 01:26:54 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:55.880 01:26:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.880 01:26:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.880 01:26:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:55.880 ************************************ 00:06:55.880 START TEST raid1_resize_data_offset_test 00:06:55.880 ************************************ 00:06:55.880 01:26:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:06:55.880 01:26:54 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # 
raid_pid=72640 00:06:55.880 01:26:54 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:55.880 01:26:54 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 72640' 00:06:55.880 Process raid pid: 72640 00:06:55.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.880 01:26:54 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 72640 00:06:55.880 01:26:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 72640 ']' 00:06:55.880 01:26:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.880 01:26:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.880 01:26:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.880 01:26:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.880 01:26:54 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.880 [2024-10-09 01:26:54.697628] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:55.880 [2024-10-09 01:26:54.697838] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.139 [2024-10-09 01:26:54.831434] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:56.139 [2024-10-09 01:26:54.856195] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.139 [2024-10-09 01:26:54.924933] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.139 [2024-10-09 01:26:55.000663] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.139 [2024-10-09 01:26:55.000710] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.707 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.707 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:06:56.707 01:26:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:56.707 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.707 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.707 malloc0 00:06:56.707 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.707 01:26:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:56.707 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.707 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.966 malloc1 00:06:56.966 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.966 01:26:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:56.966 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.966 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- 
# set +x 00:06:56.966 null0 00:06:56.966 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.966 01:26:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:56.966 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.966 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.966 [2024-10-09 01:26:55.628763] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:56.966 [2024-10-09 01:26:55.630856] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:56.966 [2024-10-09 01:26:55.630960] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:56.966 [2024-10-09 01:26:55.631111] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:56.966 [2024-10-09 01:26:55.631126] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:56.966 [2024-10-09 01:26:55.631399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:06:56.966 [2024-10-09 01:26:55.631550] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:56.966 [2024-10-09 01:26:55.631561] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:56.966 [2024-10-09 01:26:55.631707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:56.966 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.966 01:26:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.966 01:26:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq 
-r '.[].base_bdevs_list[2].data_offset' 00:06:56.966 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.966 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.966 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.966 01:26:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:56.966 01:26:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:56.966 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.966 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.966 [2024-10-09 01:26:55.692779] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:56.966 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.966 01:26:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:56.966 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.966 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.225 malloc2 00:06:57.225 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.225 01:26:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2 00:06:57.225 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.225 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.225 [2024-10-09 01:26:55.909715] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev malloc2 is claimed 00:06:57.225 [2024-10-09 01:26:55.917126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:06:57.225 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.225 [2024-10-09 01:26:55.919346] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:57.225 01:26:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.225 01:26:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:57.225 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.225 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.225 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.225 01:26:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:57.225 01:26:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 72640 00:06:57.225 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 72640 ']' 00:06:57.225 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 72640 00:06:57.225 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:06:57.225 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.225 01:26:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72640 00:06:57.225 killing process with pid 72640 00:06:57.225 01:26:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.225 01:26:56 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.225 01:26:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72640' 00:06:57.225 01:26:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 72640 00:06:57.225 [2024-10-09 01:26:56.010952] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:57.225 01:26:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 72640 00:06:57.225 [2024-10-09 01:26:56.011748] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:57.225 [2024-10-09 01:26:56.011878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:57.225 [2024-10-09 01:26:56.011905] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:57.225 [2024-10-09 01:26:56.021094] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:57.225 [2024-10-09 01:26:56.021512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:57.225 [2024-10-09 01:26:56.021552] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:57.792 [2024-10-09 01:26:56.422489] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:58.051 01:26:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:06:58.051 00:06:58.051 real 0m2.178s 00:06:58.051 user 0m1.980s 00:06:58.051 sys 0m0.655s 00:06:58.051 01:26:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.051 01:26:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.051 ************************************ 00:06:58.051 END TEST raid1_resize_data_offset_test 00:06:58.051 
************************************ 00:06:58.051 01:26:56 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:58.051 01:26:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:58.051 01:26:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.051 01:26:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:58.051 ************************************ 00:06:58.051 START TEST raid0_resize_superblock_test 00:06:58.051 ************************************ 00:06:58.051 01:26:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:06:58.051 01:26:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:58.051 01:26:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=72696 00:06:58.051 Process raid pid: 72696 00:06:58.051 01:26:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:58.051 01:26:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 72696' 00:06:58.051 01:26:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 72696 00:06:58.051 01:26:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72696 ']' 00:06:58.051 01:26:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.051 01:26:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.051 01:26:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:58.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.051 01:26:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.051 01:26:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.322 [2024-10-09 01:26:56.951056] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:58.322 [2024-10-09 01:26:56.951194] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.322 [2024-10-09 01:26:57.088345] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:58.322 [2024-10-09 01:26:57.111626] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.322 [2024-10-09 01:26:57.188061] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.595 [2024-10-09 01:26:57.264363] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:58.595 [2024-10-09 01:26:57.264405] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:59.162 01:26:57 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.162 01:26:57 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:59.162 01:26:57 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:59.162 01:26:57 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.162 01:26:57 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.162 malloc0 00:06:59.162 01:26:57 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.162 01:26:57 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:59.162 01:26:57 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.162 01:26:57 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.162 [2024-10-09 01:26:57.981101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:59.162 [2024-10-09 01:26:57.981176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:59.162 [2024-10-09 01:26:57.981209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:59.162 [2024-10-09 01:26:57.981221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:59.162 [2024-10-09 01:26:57.983760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:59.162 [2024-10-09 01:26:57.983830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:59.162 pt0 00:06:59.163 01:26:57 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.163 01:26:57 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:59.163 01:26:57 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.163 01:26:57 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.421 29e96935-3ba4-4969-b560-7f2c24558970 00:06:59.421 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.421 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:59.421 01:26:58 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.421 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.421 f1b9188b-4b51-4263-95de-14e330e74b19 00:06:59.421 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.421 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:59.421 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.421 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.421 55015c25-31e4-4cb8-a75d-9f93a8d59f75 00:06:59.421 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.421 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:59.421 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:59.421 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.421 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.421 [2024-10-09 01:26:58.189816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev f1b9188b-4b51-4263-95de-14e330e74b19 is claimed 00:06:59.421 [2024-10-09 01:26:58.189905] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 55015c25-31e4-4cb8-a75d-9f93a8d59f75 is claimed 00:06:59.421 [2024-10-09 01:26:58.190025] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:59.421 [2024-10-09 01:26:58.190035] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:59.421 [2024-10-09 01:26:58.190306] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:06:59.421 [2024-10-09 01:26:58.190460] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:59.421 [2024-10-09 01:26:58.190473] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:59.422 [2024-10-09 01:26:58.190622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:59.422 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.422 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:59.422 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:59.422 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.422 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.422 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.422 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:59.422 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:59.422 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.422 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.422 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:59.422 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.422 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:59.422 01:26:58 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:59.422 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:59.422 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.422 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.422 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:59.422 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:59.422 [2024-10-09 01:26:58.302030] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:59.422 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.681 [2024-10-09 01:26:58.349993] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:59.681 [2024-10-09 01:26:58.350021] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f1b9188b-4b51-4263-95de-14e330e74b19' was resized: old size 131072, new size 204800 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.681 01:26:58 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.681 [2024-10-09 01:26:58.361908] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:59.681 [2024-10-09 01:26:58.361932] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '55015c25-31e4-4cb8-a75d-9f93a8d59f75' was resized: old size 131072, new size 204800 00:06:59.681 [2024-10-09 01:26:58.361957] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.681 01:26:58 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.681 [2024-10-09 01:26:58.478059] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.681 [2024-10-09 01:26:58.521868] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:06:59.681 [2024-10-09 01:26:58.521941] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:59.681 [2024-10-09 01:26:58.521950] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:59.681 [2024-10-09 01:26:58.521964] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:59.681 [2024-10-09 01:26:58.522081] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:59.681 [2024-10-09 01:26:58.522118] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:59.681 [2024-10-09 01:26:58.522127] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.681 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.681 [2024-10-09 01:26:58.533863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:59.681 [2024-10-09 01:26:58.533920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:59.682 [2024-10-09 01:26:58.533944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:59.682 [2024-10-09 01:26:58.533953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:59.682 [2024-10-09 01:26:58.536347] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:59.682 [2024-10-09 01:26:58.536382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:06:59.682 [2024-10-09 01:26:58.537879] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f1b9188b-4b51-4263-95de-14e330e74b19 00:06:59.682 [2024-10-09 01:26:58.537934] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev f1b9188b-4b51-4263-95de-14e330e74b19 is claimed 00:06:59.682 [2024-10-09 01:26:58.538019] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 55015c25-31e4-4cb8-a75d-9f93a8d59f75 00:06:59.682 [2024-10-09 01:26:58.538034] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 55015c25-31e4-4cb8-a75d-9f93a8d59f75 is claimed 00:06:59.682 [2024-10-09 01:26:58.538143] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 55015c25-31e4-4cb8-a75d-9f93a8d59f75 (2) smaller than existing raid bdev Raid (3) 00:06:59.682 [2024-10-09 01:26:58.538160] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev f1b9188b-4b51-4263-95de-14e330e74b19: File exists 00:06:59.682 [2024-10-09 01:26:58.538199] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:59.682 [2024-10-09 01:26:58.538206] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:59.682 [2024-10-09 01:26:58.538460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:06:59.682 pt0 00:06:59.682 [2024-10-09 01:26:58.538595] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:59.682 [2024-10-09 01:26:58.538615] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:59.682 [2024-10-09 01:26:58.538720] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:59.682 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.682 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:06:59.682 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.682 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.682 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.682 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:59.682 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:59.682 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:59.682 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:59.682 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.682 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.682 [2024-10-09 01:26:58.562178] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:59.941 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.941 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:59.941 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:59.941 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:59.941 01:26:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 72696 00:06:59.941 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72696 ']' 00:06:59.941 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72696 00:06:59.941 01:26:58 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:06:59.941 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.941 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72696 00:06:59.941 killing process with pid 72696 00:06:59.941 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:59.941 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:59.941 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72696' 00:06:59.941 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 72696 00:06:59.941 [2024-10-09 01:26:58.644712] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:59.941 [2024-10-09 01:26:58.644794] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:59.941 [2024-10-09 01:26:58.644832] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:59.941 [2024-10-09 01:26:58.644843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:59.941 01:26:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 72696 00:07:00.199 [2024-10-09 01:26:58.954199] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:00.459 01:26:59 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:00.459 00:07:00.459 real 0m2.461s 00:07:00.459 user 0m2.564s 00:07:00.459 sys 0m0.673s 00:07:00.459 01:26:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.459 ************************************ 00:07:00.459 END TEST raid0_resize_superblock_test 00:07:00.459 
************************************ 00:07:00.459 01:26:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.717 01:26:59 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:00.717 01:26:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:00.718 01:26:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.718 01:26:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:00.718 ************************************ 00:07:00.718 START TEST raid1_resize_superblock_test 00:07:00.718 ************************************ 00:07:00.718 01:26:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:07:00.718 01:26:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:00.718 01:26:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=72773 00:07:00.718 Process raid pid: 72773 00:07:00.718 01:26:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:00.718 01:26:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 72773' 00:07:00.718 01:26:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 72773 00:07:00.718 01:26:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72773 ']' 00:07:00.718 01:26:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.718 01:26:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:00.718 01:26:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.718 01:26:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.718 01:26:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.718 [2024-10-09 01:26:59.479980] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:07:00.718 [2024-10-09 01:26:59.480119] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.977 [2024-10-09 01:26:59.616716] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:00.977 [2024-10-09 01:26:59.644990] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.977 [2024-10-09 01:26:59.717913] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.977 [2024-10-09 01:26:59.795291] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.977 [2024-10-09 01:26:59.795328] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:01.544 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.544 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:01.544 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:01.544 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.544 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.803 malloc0 00:07:01.803 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.803 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:01.803 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.803 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.803 [2024-10-09 01:27:00.506201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:01.803 [2024-10-09 01:27:00.506365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:01.803 [2024-10-09 01:27:00.506414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:01.803 [2024-10-09 01:27:00.506443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:07:01.803 [2024-10-09 01:27:00.508905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:01.803 [2024-10-09 01:27:00.508977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:01.803 pt0 00:07:01.803 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.803 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:01.803 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.803 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.803 36a1d6e4-79a4-4751-9f32-b27b28479d6e 00:07:01.803 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.803 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:01.803 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.803 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.062 2cc6dd01-8aa8-4ee0-b0fb-0c53511367bc 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.062 83a96232-0173-4206-8b20-06f0ab248618 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 
-- # case $raid_level in 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.062 [2024-10-09 01:27:00.715852] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2cc6dd01-8aa8-4ee0-b0fb-0c53511367bc is claimed 00:07:02.062 [2024-10-09 01:27:00.715995] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 83a96232-0173-4206-8b20-06f0ab248618 is claimed 00:07:02.062 [2024-10-09 01:27:00.716133] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:02.062 [2024-10-09 01:27:00.716147] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:02.062 [2024-10-09 01:27:00.716448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:02.062 [2024-10-09 01:27:00.716627] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:02.062 [2024-10-09 01:27:00.716648] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:02.062 [2024-10-09 01:27:00.716777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:02.062 01:27:00 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.062 [2024-10-09 01:27:00.828121] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 
00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.062 [2024-10-09 01:27:00.871979] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:02.062 [2024-10-09 01:27:00.872049] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '2cc6dd01-8aa8-4ee0-b0fb-0c53511367bc' was resized: old size 131072, new size 204800 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.062 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.063 [2024-10-09 01:27:00.883926] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:02.063 [2024-10-09 01:27:00.883989] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '83a96232-0173-4206-8b20-06f0ab248618' was resized: old size 131072, new size 204800 00:07:02.063 [2024-10-09 01:27:00.884015] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:02.063 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.063 01:27:00 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:02.063 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.063 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.063 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:02.063 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.063 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:02.063 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:02.063 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.063 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.063 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:02.063 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.323 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:02.323 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:02.323 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:02.323 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.323 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:02.323 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:02.323 01:27:00 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:07:02.323 [2024-10-09 01:27:00.992088] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.323 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:02.323 01:27:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.323 [2024-10-09 01:27:01.035904] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:02.323 [2024-10-09 01:27:01.036027] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:02.323 [2024-10-09 01:27:01.036071] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:02.323 [2024-10-09 01:27:01.036248] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:02.323 [2024-10-09 01:27:01.036419] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:02.323 [2024-10-09 01:27:01.036531] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:02.323 [2024-10-09 01:27:01.036587] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.323 01:27:01 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.323 [2024-10-09 01:27:01.047879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:02.323 [2024-10-09 01:27:01.047972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:02.323 [2024-10-09 01:27:01.048013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:02.323 [2024-10-09 01:27:01.048040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:02.323 [2024-10-09 01:27:01.050422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:02.323 [2024-10-09 01:27:01.050456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:02.323 pt0 00:07:02.323 [2024-10-09 01:27:01.051889] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 2cc6dd01-8aa8-4ee0-b0fb-0c53511367bc 00:07:02.323 [2024-10-09 01:27:01.051945] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2cc6dd01-8aa8-4ee0-b0fb-0c53511367bc is claimed 00:07:02.323 [2024-10-09 01:27:01.052029] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 83a96232-0173-4206-8b20-06f0ab248618 00:07:02.323 [2024-10-09 01:27:01.052044] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 83a96232-0173-4206-8b20-06f0ab248618 is claimed 00:07:02.323 [2024-10-09 01:27:01.052180] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 83a96232-0173-4206-8b20-06f0ab248618 (2) smaller than existing raid bdev Raid (3) 00:07:02.323 [2024-10-09 01:27:01.052199] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: 
Failed to examine bdev 2cc6dd01-8aa8-4ee0-b0fb-0c53511367bc: File exists 00:07:02.323 [2024-10-09 01:27:01.052242] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:02.323 [2024-10-09 01:27:01.052248] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:02.323 [2024-10-09 01:27:01.052507] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:02.323 [2024-10-09 01:27:01.052636] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:02.323 [2024-10-09 01:27:01.052648] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:02.323 [2024-10-09 01:27:01.052749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.323 01:27:01 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.323 [2024-10-09 01:27:01.076189] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 72773 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72773 ']' 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72773 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72773 00:07:02.323 killing process with pid 72773 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72773' 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 72773 00:07:02.323 [2024-10-09 01:27:01.164269] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:07:02.323 [2024-10-09 01:27:01.164336] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:02.323 [2024-10-09 01:27:01.164379] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:02.323 [2024-10-09 01:27:01.164391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:02.323 01:27:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 72773 00:07:02.582 [2024-10-09 01:27:01.471171] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:03.150 ************************************ 00:07:03.150 END TEST raid1_resize_superblock_test 00:07:03.150 ************************************ 00:07:03.150 01:27:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:03.150 00:07:03.150 real 0m2.453s 00:07:03.150 user 0m2.540s 00:07:03.150 sys 0m0.691s 00:07:03.150 01:27:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.150 01:27:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.150 01:27:01 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:03.150 01:27:01 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:03.150 01:27:01 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:03.150 01:27:01 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:03.150 01:27:01 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:03.150 01:27:01 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:03.150 01:27:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:03.150 01:27:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.150 01:27:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:03.150 ************************************ 
00:07:03.150 START TEST raid_function_test_raid0 00:07:03.150 ************************************ 00:07:03.150 01:27:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:07:03.150 01:27:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:03.150 01:27:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:03.150 01:27:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:03.150 Process raid pid: 72853 00:07:03.150 01:27:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=72853 00:07:03.150 01:27:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:03.150 01:27:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 72853' 00:07:03.150 01:27:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 72853 00:07:03.150 01:27:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 72853 ']' 00:07:03.150 01:27:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.150 01:27:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.150 01:27:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:03.150 01:27:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.150 01:27:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:03.150 [2024-10-09 01:27:02.018578] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:07:03.150 [2024-10-09 01:27:02.018805] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:03.411 [2024-10-09 01:27:02.151797] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:03.411 [2024-10-09 01:27:02.181539] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.411 [2024-10-09 01:27:02.255039] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.669 [2024-10-09 01:27:02.333342] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.669 [2024-10-09 01:27:02.333482] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:04.237 Base_1 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:04.237 Base_2 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:04.237 [2024-10-09 01:27:02.905029] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:04.237 [2024-10-09 01:27:02.907216] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:04.237 [2024-10-09 01:27:02.907317] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:04.237 [2024-10-09 01:27:02.907359] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:04.237 [2024-10-09 01:27:02.907676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:04.237 [2024-10-09 01:27:02.907822] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:04.237 [2024-10-09 01:27:02.907871] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:04.237 [2024-10-09 01:27:02.908039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd 
bdev_raid_get_bdevs online 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:04.237 01:27:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:04.238 01:27:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:04.238 01:27:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:04.238 01:27:02 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:04.496 [2024-10-09 01:27:03.149165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:04.496 /dev/nbd0 00:07:04.496 01:27:03 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:04.496 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:04.496 01:27:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:04.496 01:27:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:07:04.496 01:27:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:04.496 01:27:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:04.496 01:27:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:04.496 01:27:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:07:04.496 01:27:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:04.496 01:27:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:04.496 01:27:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.496 1+0 records in 00:07:04.496 1+0 records out 00:07:04.496 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236913 s, 17.3 MB/s 00:07:04.496 01:27:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.496 01:27:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:07:04.496 01:27:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.496 01:27:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:04.496 01:27:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 
-- # return 0 00:07:04.496 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.496 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:04.496 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:04.496 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:04.496 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:04.755 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:04.755 { 00:07:04.755 "nbd_device": "/dev/nbd0", 00:07:04.755 "bdev_name": "raid" 00:07:04.755 } 00:07:04.755 ]' 00:07:04.755 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:04.755 { 00:07:04.755 "nbd_device": "/dev/nbd0", 00:07:04.755 "bdev_name": "raid" 00:07:04.755 } 00:07:04.755 ]' 00:07:04.755 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.755 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:04.755 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:04.755 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.755 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:04.755 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:04.755 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:04.755 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:04.755 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 
00:07:04.755 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:04.755 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:04.755 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:04.755 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:04.755 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:04.755 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:04.755 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:04.755 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:04.755 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:04.756 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:04.756 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:04.756 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:04.756 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:04.756 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:04.756 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:04.756 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:04.756 4096+0 records in 00:07:04.756 4096+0 records out 00:07:04.756 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0357284 s, 58.7 MB/s 00:07:04.756 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd 
if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:05.015 4096+0 records in 00:07:05.015 4096+0 records out 00:07:05.015 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.217872 s, 9.6 MB/s 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:05.015 128+0 records in 00:07:05.015 128+0 records out 00:07:05.015 65536 bytes (66 kB, 64 KiB) copied, 0.00124055 s, 52.8 MB/s 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:05.015 2035+0 records in 00:07:05.015 2035+0 records out 00:07:05.015 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0148619 s, 70.1 MB/s 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:05.015 456+0 records in 00:07:05.015 456+0 records out 00:07:05.015 233472 bytes (233 kB, 228 KiB) copied, 0.0040741 s, 57.3 MB/s 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # 
return 0 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.015 01:27:03 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:05.274 01:27:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:05.274 [2024-10-09 01:27:04.090984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:05.274 01:27:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:05.274 01:27:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:05.275 01:27:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.275 01:27:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.275 01:27:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:05.275 01:27:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:05.275 01:27:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.275 01:27:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:05.275 01:27:04 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:05.275 01:27:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:05.534 01:27:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:05.534 01:27:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:05.534 01:27:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.534 01:27:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:05.534 01:27:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:05.534 01:27:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.534 01:27:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:05.534 01:27:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:05.534 01:27:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:05.534 01:27:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:05.534 01:27:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:05.534 01:27:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 72853 00:07:05.534 01:27:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 72853 ']' 00:07:05.534 01:27:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 72853 00:07:05.534 01:27:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:07:05.534 01:27:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.534 01:27:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72853 
00:07:05.534 killing process with pid 72853 00:07:05.534 01:27:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:05.534 01:27:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:05.534 01:27:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72853' 00:07:05.534 01:27:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 72853 00:07:05.534 [2024-10-09 01:27:04.413228] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:05.534 [2024-10-09 01:27:04.413354] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:05.534 01:27:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 72853 00:07:05.534 [2024-10-09 01:27:04.413411] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:05.534 [2024-10-09 01:27:04.413422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:05.793 [2024-10-09 01:27:04.455280] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:06.054 ************************************ 00:07:06.054 END TEST raid_function_test_raid0 00:07:06.054 ************************************ 00:07:06.054 01:27:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:06.054 00:07:06.054 real 0m2.894s 00:07:06.054 user 0m3.397s 00:07:06.054 sys 0m1.058s 00:07:06.054 01:27:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.054 01:27:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:06.054 01:27:04 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:06.054 01:27:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 
-le 1 ']' 00:07:06.054 01:27:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.054 01:27:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:06.054 ************************************ 00:07:06.054 START TEST raid_function_test_concat 00:07:06.054 ************************************ 00:07:06.054 01:27:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:07:06.054 01:27:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:06.054 01:27:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:06.054 01:27:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:06.054 Process raid pid: 72970 00:07:06.054 01:27:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=72970 00:07:06.054 01:27:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:06.054 01:27:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 72970' 00:07:06.054 01:27:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 72970 00:07:06.054 01:27:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 72970 ']' 00:07:06.054 01:27:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.054 01:27:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.054 01:27:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:06.054 01:27:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.054 01:27:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:06.313 [2024-10-09 01:27:04.978608] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:07:06.313 [2024-10-09 01:27:04.978801] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.313 [2024-10-09 01:27:05.109930] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:06.313 [2024-10-09 01:27:05.122220] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.313 [2024-10-09 01:27:05.196224] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.572 [2024-10-09 01:27:05.273764] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.572 [2024-10-09 01:27:05.273799] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.139 01:27:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.139 01:27:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:07:07.139 01:27:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:07.139 01:27:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.139 01:27:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:07.140 Base_1 00:07:07.140 01:27:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.140 01:27:05 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:07.140 01:27:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.140 01:27:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:07.140 Base_2 00:07:07.140 01:27:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.140 01:27:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:07.140 01:27:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.140 01:27:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:07.140 [2024-10-09 01:27:05.854864] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:07.140 [2024-10-09 01:27:05.857057] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:07.140 [2024-10-09 01:27:05.857164] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:07.140 [2024-10-09 01:27:05.857200] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:07.140 [2024-10-09 01:27:05.857486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:07.140 [2024-10-09 01:27:05.857697] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:07.140 [2024-10-09 01:27:05.857742] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:07.140 [2024-10-09 01:27:05.857917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.140 01:27:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.140 01:27:05 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:07.140 01:27:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:07.140 01:27:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.140 01:27:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:07.140 01:27:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.140 01:27:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:07.140 01:27:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:07.140 01:27:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:07.140 01:27:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:07.140 01:27:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:07.140 01:27:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:07.140 01:27:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:07.140 01:27:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:07.140 01:27:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:07.140 01:27:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:07.140 01:27:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:07.140 01:27:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:07.399 [2024-10-09 01:27:06.102996] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:07.399 /dev/nbd0 00:07:07.399 01:27:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:07.399 01:27:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:07.399 01:27:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:07.399 01:27:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:07:07.399 01:27:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:07.399 01:27:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:07.399 01:27:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:07.399 01:27:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:07:07.399 01:27:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:07.399 01:27:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:07.399 01:27:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.399 1+0 records in 00:07:07.399 1+0 records out 00:07:07.399 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224144 s, 18.3 MB/s 00:07:07.399 01:27:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.399 01:27:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:07:07.399 01:27:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.399 01:27:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 
4096 '!=' 0 ']' 00:07:07.399 01:27:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:07:07.399 01:27:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.399 01:27:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:07.399 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:07.399 01:27:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:07.399 01:27:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:07.658 01:27:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:07.658 { 00:07:07.658 "nbd_device": "/dev/nbd0", 00:07:07.658 "bdev_name": "raid" 00:07:07.658 } 00:07:07.658 ]' 00:07:07.658 01:27:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:07.658 { 00:07:07.658 "nbd_device": "/dev/nbd0", 00:07:07.659 "bdev_name": "raid" 00:07:07.659 } 00:07:07.659 ]' 00:07:07.659 01:27:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.659 01:27:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:07.659 01:27:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:07.659 01:27:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.659 01:27:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:07.659 01:27:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:07.659 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:07.659 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 
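The nbd_get_count trace above shows the counting idiom in SPDK's nbd_common.sh: the nbd_get_disks RPC returns a JSON array, jq pulls each .nbd_device, and grep -c counts the matches, with a trailing `true` so an empty list (where grep exits non-zero) does not trip `set -e`. A minimal sketch of the same pipeline against a hard-coded sample payload instead of a live rpc.py socket (the JSON literal below is an illustrative stand-in, not captured RPC output):

```shell
# Stand-in for the nbd_get_disks RPC response; a live run would pipe from
# rpc.py -s /var/tmp/spdk.sock nbd_get_disks instead.
nbd_disks_json='[ { "nbd_device": "/dev/nbd0", "bdev_name": "raid" } ]'
# jq emits one device path per line; grep -c counts lines matching /dev/nbd.
# grep -c exits non-zero when the count is 0, so `|| true` keeps the pipeline
# alive under `set -e` (the lone `true` step in the trace is that fallback).
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"
```

With the one-disk payload above the pipeline prints 1; with an empty array (`[]`) it prints 0, which is the post-shutdown check the trace performs right before killprocess.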
00:07:07.659 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:07.659 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:07.659 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:07.659 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:07.659 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:07.659 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:07.659 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:07.659 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:07.659 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:07.659 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:07.659 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:07.659 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:07.659 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:07.659 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:07.659 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:07.659 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:07.659 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:07.659 4096+0 records in 00:07:07.659 4096+0 records out 00:07:07.659 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.0327043 s, 64.1 MB/s 00:07:07.659 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:07.918 4096+0 records in 00:07:07.918 4096+0 records out 00:07:07.918 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.217902 s, 9.6 MB/s 00:07:07.918 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:07.918 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:07.918 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:07.918 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:07.918 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:07.918 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:07.918 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:07.918 128+0 records in 00:07:07.918 128+0 records out 00:07:07.918 65536 bytes (66 kB, 64 KiB) copied, 0.00128285 s, 51.1 MB/s 00:07:07.918 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:07.918 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:07.918 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:07.918 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:07.918 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:07.918 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:07.918 01:27:06 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:07.918 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:07.918 2035+0 records in 00:07:07.918 2035+0 records out 00:07:07.918 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0146739 s, 71.0 MB/s 00:07:07.918 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:07.918 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:07.918 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:07.918 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:07.918 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:07.918 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:07.918 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:07.918 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:07.918 456+0 records in 00:07:07.918 456+0 records out 00:07:07.918 233472 bytes (233 kB, 228 KiB) copied, 0.00373263 s, 62.5 MB/s 00:07:07.918 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:08.177 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:08.177 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:08.177 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:08.177 01:27:06 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:08.177 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:08.177 01:27:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:08.177 01:27:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:08.177 01:27:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:08.177 01:27:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:08.177 01:27:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:08.177 01:27:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.177 01:27:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:08.177 01:27:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:08.177 01:27:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:08.177 [2024-10-09 01:27:07.040946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:08.177 01:27:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:08.177 01:27:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.177 01:27:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.177 01:27:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:08.177 01:27:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:08.177 01:27:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.177 
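The loop just traced is bdev_raid.sh's raid_unmap_data_verify: for each (offset, count) pair it zeroes that range in the reference file with dd conv=notrunc, blkdiscards the same byte range on the raid-backed /dev/nbd0, flushes, and byte-compares the full 2 MiB. A reduced sketch of the same check, simulating the device with a second plain file (the temp-file paths are stand-ins; on a real nbd device the second dd would instead be `blkdiscard -o $((off*512)) -l $((num*512))`, relying on discarded blocks reading back as zeroes):

```shell
set -e
blksize=512
ref=$(mktemp)   # reference data file (the trace's /raidtest/raidrandtest)
dev=$(mktemp)   # plain-file stand-in for /dev/nbd0
dd if=/dev/urandom of="$ref" bs=$blksize count=4096 2>/dev/null
cp "$ref" "$dev"                       # initial full write, as in bdev_raid.sh@30
offs=(0 1028 321)                      # same unmap_blk_offs as the trace
nums=(128 2035 456)                    # same unmap_blk_nums as the trace
for i in 0 1 2; do
  # punch zeroes into the reference copy without truncating it ...
  dd if=/dev/zero of="$ref" bs=$blksize seek="${offs[$i]}" count="${nums[$i]}" conv=notrunc 2>/dev/null
  # ... and into the device stand-in (a real run discards the range instead)
  dd if=/dev/zero of="$dev" bs=$blksize seek="${offs[$i]}" count="${nums[$i]}" conv=notrunc 2>/dev/null
  cmp -b -n $((4096 * blksize)) "$ref" "$dev"   # any mismatch aborts via set -e
done
echo PASS
```

Three clean cmp passes correspond to the three `(( i < 3 ))` iterations in the trace; a raid bdev that failed to zero a discarded range would fail the cmp at that iteration.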
01:27:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:08.177 01:27:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:08.177 01:27:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:08.436 01:27:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:08.436 01:27:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.436 01:27:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:08.436 01:27:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:08.436 01:27:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.436 01:27:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:08.436 01:27:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:08.436 01:27:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:08.436 01:27:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:08.436 01:27:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:08.436 01:27:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:08.436 01:27:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 72970 00:07:08.436 01:27:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 72970 ']' 00:07:08.436 01:27:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 72970 00:07:08.436 01:27:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:07:08.436 01:27:07 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.436 01:27:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72970 00:07:08.696 killing process with pid 72970 00:07:08.696 01:27:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.696 01:27:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.696 01:27:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72970' 00:07:08.696 01:27:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 72970 00:07:08.696 [2024-10-09 01:27:07.360242] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:08.696 [2024-10-09 01:27:07.360374] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:08.696 01:27:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 72970 00:07:08.696 [2024-10-09 01:27:07.360443] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:08.696 [2024-10-09 01:27:07.360454] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:08.696 [2024-10-09 01:27:07.402191] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:08.955 01:27:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:08.955 00:07:08.955 real 0m2.880s 00:07:08.955 user 0m3.375s 00:07:08.955 sys 0m1.051s 00:07:08.955 01:27:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.955 01:27:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:08.955 ************************************ 00:07:08.955 END TEST raid_function_test_concat 00:07:08.955 ************************************ 
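killprocess, traced twice above (pids 72853 and 72970), is a guarded teardown: probe the pid with `kill -0`, inspect the process name with `ps --no-headers -o comm=`, announce, then `kill` and `wait` to reap. The same sequence against a throwaway `sleep` process standing in for the SPDK app (the sleep command and its name check are illustrative; the trace saw reactor_0):

```shell
sleep 30 &                      # throwaway process standing in for bdev_svc
pid=$!
kill -0 "$pid"                  # probe: succeeds only while the process is alive
name=$(ps --no-headers -o comm= "$pid")   # "sleep" here, "reactor_0" in the trace
echo "killing process with pid $pid"
kill "$pid"
wait "$pid" 2>/dev/null || true # reap; wait reports the killed status, hence || true
kill -0 "$pid" 2>/dev/null && alive=1 || alive=0
```

After the reap, the second `kill -0` fails, which is the state the test harness requires before declaring the raid app shut down.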
00:07:08.956 01:27:07 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:08.956 01:27:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:08.956 01:27:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.956 01:27:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:09.215 ************************************ 00:07:09.215 START TEST raid0_resize_test 00:07:09.215 ************************************ 00:07:09.215 01:27:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:07:09.215 01:27:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:09.215 01:27:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:09.215 01:27:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:09.215 01:27:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:09.215 01:27:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:09.215 01:27:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:09.215 01:27:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:09.215 01:27:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:09.215 01:27:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=73082 00:07:09.215 01:27:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:09.215 Process raid pid: 73082 00:07:09.215 01:27:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 73082' 00:07:09.215 01:27:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 73082 00:07:09.215 01:27:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 
73082 ']' 00:07:09.215 01:27:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.215 01:27:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.215 01:27:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.215 01:27:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.215 01:27:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.215 [2024-10-09 01:27:07.934247] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:07:09.215 [2024-10-09 01:27:07.934469] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.215 [2024-10-09 01:27:08.067506] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
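waitforlisten here (and waitfornbd/waitfornbd_exit earlier) all use the same bounded-poll idiom the xtrace shows: up to a fixed number of attempts, test a condition, break on success. A generic sketch of that loop, polling for a file that a background task creates (the path, delay, and 20-attempt cap are illustrative; the real helpers probe an RPC socket or grep /proc/partitions):

```shell
target=$(mktemp -u)                # path that does not exist yet
( sleep 0.2; touch "$target" ) &   # background task, like bdev_svc starting up
found=0
for (( i = 1; i <= 20; i++ )); do  # bounded retries, as in (( i <= 20 ))
  if [ -e "$target" ]; then        # the condition; waitfornbd greps /proc/partitions
    found=1
    break                          # same break the xtrace shows on success
  fi
  sleep 0.1                        # back off between attempts
done
wait                               # reap the background task
```

If the loop exhausts its attempts without the condition holding, the real helpers return non-zero and the enclosing test fails instead of hanging indefinitely.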
00:07:09.215 [2024-10-09 01:27:08.096773] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.474 [2024-10-09 01:27:08.172388] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.474 [2024-10-09 01:27:08.248987] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.474 [2024-10-09 01:27:08.249022] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.043 Base_1 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.043 Base_2 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.043 [2024-10-09 01:27:08.793560] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:10.043 [2024-10-09 01:27:08.795738] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:10.043 [2024-10-09 01:27:08.795831] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:10.043 [2024-10-09 01:27:08.795869] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:10.043 [2024-10-09 01:27:08.796145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:10.043 [2024-10-09 01:27:08.796293] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:10.043 [2024-10-09 01:27:08.796360] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:10.043 [2024-10-09 01:27:08.796511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.043 [2024-10-09 01:27:08.805505] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:10.043 [2024-10-09 01:27:08.805590] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:10.043 true 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 
00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.043 [2024-10-09 01:27:08.821720] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.043 [2024-10-09 01:27:08.869513] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:10.043 [2024-10-09 01:27:08.869588] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:10.043 [2024-10-09 01:27:08.869637] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:10.043 true 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 
00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.043 [2024-10-09 01:27:08.885711] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 73082 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 73082 ']' 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 73082 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.043 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73082 00:07:10.303 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.303 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.303 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73082' 00:07:10.303 killing process with pid 73082 00:07:10.303 01:27:08 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@969 -- # kill 73082 00:07:10.303 [2024-10-09 01:27:08.957973] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:10.303 [2024-10-09 01:27:08.958112] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:10.303 01:27:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 73082 00:07:10.303 [2024-10-09 01:27:08.958191] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:10.303 [2024-10-09 01:27:08.958216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:10.303 [2024-10-09 01:27:08.960194] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:10.562 01:27:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:10.562 00:07:10.562 real 0m1.486s 00:07:10.562 user 0m1.544s 00:07:10.562 sys 0m0.405s 00:07:10.562 01:27:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.562 01:27:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.562 ************************************ 00:07:10.562 END TEST raid0_resize_test 00:07:10.562 ************************************ 00:07:10.562 01:27:09 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:10.562 01:27:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:10.562 01:27:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.562 01:27:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:10.562 ************************************ 00:07:10.562 START TEST raid1_resize_test 00:07:10.562 ************************************ 00:07:10.562 01:27:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:07:10.562 01:27:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 
00:07:10.562 01:27:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:10.562 01:27:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:10.562 01:27:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:10.562 01:27:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:10.562 01:27:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:10.562 01:27:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:10.562 01:27:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:10.562 Process raid pid: 73138 00:07:10.562 01:27:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=73138 00:07:10.562 01:27:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:10.562 01:27:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 73138' 00:07:10.562 01:27:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 73138 00:07:10.562 01:27:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 73138 ']' 00:07:10.562 01:27:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.563 01:27:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.563 01:27:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:10.563 01:27:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.563 01:27:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.822 [2024-10-09 01:27:09.495019] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:07:10.822 [2024-10-09 01:27:09.495259] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.822 [2024-10-09 01:27:09.632575] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:10.822 [2024-10-09 01:27:09.656642] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.081 [2024-10-09 01:27:09.733028] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.081 [2024-10-09 01:27:09.810104] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.081 [2024-10-09 01:27:09.810255] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.648 Base_1 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 
00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.648 Base_2 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.648 [2024-10-09 01:27:10.347311] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:11.648 [2024-10-09 01:27:10.349489] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:11.648 [2024-10-09 01:27:10.349612] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:11.648 [2024-10-09 01:27:10.349654] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:11.648 [2024-10-09 01:27:10.349963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:11.648 [2024-10-09 01:27:10.350107] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:11.648 [2024-10-09 01:27:10.350148] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:11.648 [2024-10-09 01:27:10.350298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:11.648 
01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.648 [2024-10-09 01:27:10.359264] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:11.648 [2024-10-09 01:27:10.359287] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:11.648 true 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.648 [2024-10-09 01:27:10.375438] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:11.648 01:27:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:11.649 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.649 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 
-- # set +x 00:07:11.649 [2024-10-09 01:27:10.423264] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:11.649 [2024-10-09 01:27:10.423329] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:11.649 [2024-10-09 01:27:10.423378] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:11.649 true 00:07:11.649 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.649 01:27:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:11.649 01:27:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:11.649 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.649 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.649 [2024-10-09 01:27:10.439462] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:11.649 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.649 01:27:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:11.649 01:27:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:11.649 01:27:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:11.649 01:27:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:11.649 01:27:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:11.649 01:27:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 73138 00:07:11.649 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 73138 ']' 00:07:11.649 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 73138 
00:07:11.649 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:11.649 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:11.649 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73138 00:07:11.649 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:11.649 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:11.649 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73138' 00:07:11.649 killing process with pid 73138 00:07:11.649 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 73138 00:07:11.649 [2024-10-09 01:27:10.525561] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:11.649 [2024-10-09 01:27:10.525730] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:11.649 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 73138 00:07:11.649 [2024-10-09 01:27:10.526193] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:11.649 [2024-10-09 01:27:10.526259] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:11.649 [2024-10-09 01:27:10.528026] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:12.219 01:27:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:12.219 00:07:12.219 real 0m1.495s 00:07:12.219 user 0m1.564s 00:07:12.219 sys 0m0.410s 00:07:12.219 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.219 01:27:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.219 ************************************ 00:07:12.219 END TEST 
raid1_resize_test 00:07:12.219 ************************************ 00:07:12.219 01:27:10 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:12.219 01:27:10 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:12.219 01:27:10 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:12.219 01:27:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:12.219 01:27:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.219 01:27:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:12.219 ************************************ 00:07:12.219 START TEST raid_state_function_test 00:07:12.219 ************************************ 00:07:12.219 01:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:07:12.219 01:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:12.219 01:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:12.219 01:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:12.219 01:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:12.219 01:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:12.219 01:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:12.219 01:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:12.219 01:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:12.219 01:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:12.219 01:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:12.219 01:27:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:12.219 01:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:12.219 01:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:12.219 01:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:12.219 01:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:12.219 01:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:12.219 01:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:12.219 01:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:12.220 01:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:12.220 01:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:12.220 01:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:12.220 01:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:12.220 01:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:12.220 01:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73184 00:07:12.220 01:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:12.220 01:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73184' 00:07:12.220 Process raid pid: 73184 00:07:12.220 01:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73184 00:07:12.220 01:27:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73184 ']' 00:07:12.220 01:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.220 01:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.220 01:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.220 01:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.220 01:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.220 [2024-10-09 01:27:11.067073] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:07:12.220 [2024-10-09 01:27:11.067271] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.479 [2024-10-09 01:27:11.204218] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:12.479 [2024-10-09 01:27:11.232358] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.479 [2024-10-09 01:27:11.306826] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.737 [2024-10-09 01:27:11.385013] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.737 [2024-10-09 01:27:11.385050] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.995 01:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.995 01:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:12.995 01:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:12.996 01:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.996 01:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.254 [2024-10-09 01:27:11.890408] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:13.254 [2024-10-09 01:27:11.890578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:13.254 [2024-10-09 01:27:11.890620] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:13.254 [2024-10-09 01:27:11.890645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:13.254 01:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.254 01:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:13.254 01:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:13.254 01:27:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:13.254 01:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:13.254 01:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:13.254 01:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:13.254 01:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:13.254 01:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:13.254 01:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:13.254 01:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:13.254 01:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.254 01:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:13.254 01:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.254 01:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.254 01:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.254 01:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.254 "name": "Existed_Raid", 00:07:13.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.254 "strip_size_kb": 64, 00:07:13.254 "state": "configuring", 00:07:13.254 "raid_level": "raid0", 00:07:13.254 "superblock": false, 00:07:13.254 "num_base_bdevs": 2, 00:07:13.254 "num_base_bdevs_discovered": 0, 00:07:13.254 "num_base_bdevs_operational": 2, 00:07:13.254 "base_bdevs_list": [ 00:07:13.254 { 00:07:13.254 "name": "BaseBdev1", 00:07:13.254 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:07:13.254 "is_configured": false, 00:07:13.254 "data_offset": 0, 00:07:13.254 "data_size": 0 00:07:13.254 }, 00:07:13.254 { 00:07:13.254 "name": "BaseBdev2", 00:07:13.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.254 "is_configured": false, 00:07:13.254 "data_offset": 0, 00:07:13.254 "data_size": 0 00:07:13.254 } 00:07:13.254 ] 00:07:13.254 }' 00:07:13.254 01:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.254 01:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.513 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:13.513 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.513 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.513 [2024-10-09 01:27:12.322447] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:13.513 [2024-10-09 01:27:12.322536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:13.513 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.513 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:13.513 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.513 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.513 [2024-10-09 01:27:12.334443] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:13.513 [2024-10-09 01:27:12.334526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:13.513 [2024-10-09 
01:27:12.334557] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:13.513 [2024-10-09 01:27:12.334578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:13.513 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.513 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:13.513 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.513 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.513 [2024-10-09 01:27:12.361642] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:13.513 BaseBdev1 00:07:13.513 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.513 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:13.513 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:13.513 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:13.513 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:13.513 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:13.513 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:13.513 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:13.514 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.514 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.514 01:27:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.514 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:13.514 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.514 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.514 [ 00:07:13.514 { 00:07:13.514 "name": "BaseBdev1", 00:07:13.514 "aliases": [ 00:07:13.514 "b7876b3b-edee-43be-b11b-bc5b86f06320" 00:07:13.514 ], 00:07:13.514 "product_name": "Malloc disk", 00:07:13.514 "block_size": 512, 00:07:13.514 "num_blocks": 65536, 00:07:13.514 "uuid": "b7876b3b-edee-43be-b11b-bc5b86f06320", 00:07:13.514 "assigned_rate_limits": { 00:07:13.514 "rw_ios_per_sec": 0, 00:07:13.514 "rw_mbytes_per_sec": 0, 00:07:13.514 "r_mbytes_per_sec": 0, 00:07:13.514 "w_mbytes_per_sec": 0 00:07:13.514 }, 00:07:13.514 "claimed": true, 00:07:13.514 "claim_type": "exclusive_write", 00:07:13.514 "zoned": false, 00:07:13.514 "supported_io_types": { 00:07:13.514 "read": true, 00:07:13.514 "write": true, 00:07:13.514 "unmap": true, 00:07:13.514 "flush": true, 00:07:13.514 "reset": true, 00:07:13.514 "nvme_admin": false, 00:07:13.514 "nvme_io": false, 00:07:13.514 "nvme_io_md": false, 00:07:13.514 "write_zeroes": true, 00:07:13.514 "zcopy": true, 00:07:13.514 "get_zone_info": false, 00:07:13.514 "zone_management": false, 00:07:13.514 "zone_append": false, 00:07:13.514 "compare": false, 00:07:13.514 "compare_and_write": false, 00:07:13.514 "abort": true, 00:07:13.514 "seek_hole": false, 00:07:13.514 "seek_data": false, 00:07:13.514 "copy": true, 00:07:13.514 "nvme_iov_md": false 00:07:13.514 }, 00:07:13.514 "memory_domains": [ 00:07:13.514 { 00:07:13.514 "dma_device_id": "system", 00:07:13.514 "dma_device_type": 1 00:07:13.514 }, 00:07:13.514 { 00:07:13.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.514 "dma_device_type": 
2 00:07:13.514 } 00:07:13.514 ], 00:07:13.514 "driver_specific": {} 00:07:13.514 } 00:07:13.514 ] 00:07:13.514 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.514 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:13.514 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:13.514 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:13.514 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:13.514 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:13.514 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:13.514 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:13.514 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:13.514 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:13.514 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:13.514 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:13.514 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.773 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:13.773 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.773 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.773 01:27:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.773 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.773 "name": "Existed_Raid", 00:07:13.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.773 "strip_size_kb": 64, 00:07:13.773 "state": "configuring", 00:07:13.773 "raid_level": "raid0", 00:07:13.773 "superblock": false, 00:07:13.773 "num_base_bdevs": 2, 00:07:13.773 "num_base_bdevs_discovered": 1, 00:07:13.773 "num_base_bdevs_operational": 2, 00:07:13.773 "base_bdevs_list": [ 00:07:13.773 { 00:07:13.773 "name": "BaseBdev1", 00:07:13.773 "uuid": "b7876b3b-edee-43be-b11b-bc5b86f06320", 00:07:13.773 "is_configured": true, 00:07:13.773 "data_offset": 0, 00:07:13.773 "data_size": 65536 00:07:13.773 }, 00:07:13.773 { 00:07:13.773 "name": "BaseBdev2", 00:07:13.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.773 "is_configured": false, 00:07:13.773 "data_offset": 0, 00:07:13.773 "data_size": 0 00:07:13.773 } 00:07:13.773 ] 00:07:13.773 }' 00:07:13.773 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.773 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.041 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:14.041 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.041 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.041 [2024-10-09 01:27:12.753769] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:14.041 [2024-10-09 01:27:12.753874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:14.041 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.041 01:27:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:14.041 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.041 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.041 [2024-10-09 01:27:12.765782] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:14.041 [2024-10-09 01:27:12.767890] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:14.041 [2024-10-09 01:27:12.767958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:14.041 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.041 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:14.041 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:14.041 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:14.041 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:14.041 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:14.041 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:14.041 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.041 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:14.041 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.041 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:07:14.041 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.041 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.041 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.041 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.041 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.041 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.041 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.041 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.041 "name": "Existed_Raid", 00:07:14.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.041 "strip_size_kb": 64, 00:07:14.041 "state": "configuring", 00:07:14.041 "raid_level": "raid0", 00:07:14.041 "superblock": false, 00:07:14.041 "num_base_bdevs": 2, 00:07:14.041 "num_base_bdevs_discovered": 1, 00:07:14.041 "num_base_bdevs_operational": 2, 00:07:14.041 "base_bdevs_list": [ 00:07:14.041 { 00:07:14.041 "name": "BaseBdev1", 00:07:14.041 "uuid": "b7876b3b-edee-43be-b11b-bc5b86f06320", 00:07:14.041 "is_configured": true, 00:07:14.041 "data_offset": 0, 00:07:14.041 "data_size": 65536 00:07:14.041 }, 00:07:14.041 { 00:07:14.041 "name": "BaseBdev2", 00:07:14.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.041 "is_configured": false, 00:07:14.041 "data_offset": 0, 00:07:14.041 "data_size": 0 00:07:14.041 } 00:07:14.041 ] 00:07:14.041 }' 00:07:14.041 01:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.041 01:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.324 
01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:14.324 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.324 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.602 [2024-10-09 01:27:13.225717] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:14.602 [2024-10-09 01:27:13.225861] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:14.602 [2024-10-09 01:27:13.225898] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:14.602 [2024-10-09 01:27:13.226343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:14.602 BaseBdev2 00:07:14.602 [2024-10-09 01:27:13.226606] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:14.602 [2024-10-09 01:27:13.226632] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:14.602 [2024-10-09 01:27:13.226944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.602 [ 00:07:14.602 { 00:07:14.602 "name": "BaseBdev2", 00:07:14.602 "aliases": [ 00:07:14.602 "9deecf20-2617-4748-bc16-307b8fa68a55" 00:07:14.602 ], 00:07:14.602 "product_name": "Malloc disk", 00:07:14.602 "block_size": 512, 00:07:14.602 "num_blocks": 65536, 00:07:14.602 "uuid": "9deecf20-2617-4748-bc16-307b8fa68a55", 00:07:14.602 "assigned_rate_limits": { 00:07:14.602 "rw_ios_per_sec": 0, 00:07:14.602 "rw_mbytes_per_sec": 0, 00:07:14.602 "r_mbytes_per_sec": 0, 00:07:14.602 "w_mbytes_per_sec": 0 00:07:14.602 }, 00:07:14.602 "claimed": true, 00:07:14.602 "claim_type": "exclusive_write", 00:07:14.602 "zoned": false, 00:07:14.602 "supported_io_types": { 00:07:14.602 "read": true, 00:07:14.602 "write": true, 00:07:14.602 "unmap": true, 00:07:14.602 "flush": true, 00:07:14.602 "reset": true, 00:07:14.602 "nvme_admin": false, 00:07:14.602 "nvme_io": false, 00:07:14.602 "nvme_io_md": false, 00:07:14.602 "write_zeroes": true, 00:07:14.602 "zcopy": true, 00:07:14.602 "get_zone_info": false, 00:07:14.602 "zone_management": false, 00:07:14.602 "zone_append": false, 00:07:14.602 "compare": false, 00:07:14.602 "compare_and_write": false, 
00:07:14.602 "abort": true, 00:07:14.602 "seek_hole": false, 00:07:14.602 "seek_data": false, 00:07:14.602 "copy": true, 00:07:14.602 "nvme_iov_md": false 00:07:14.602 }, 00:07:14.602 "memory_domains": [ 00:07:14.602 { 00:07:14.602 "dma_device_id": "system", 00:07:14.602 "dma_device_type": 1 00:07:14.602 }, 00:07:14.602 { 00:07:14.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.602 "dma_device_type": 2 00:07:14.602 } 00:07:14.602 ], 00:07:14.602 "driver_specific": {} 00:07:14.602 } 00:07:14.602 ] 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.602 
01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.602 "name": "Existed_Raid", 00:07:14.602 "uuid": "0e539dc6-77f3-442a-8caf-0b2207dc5eb9", 00:07:14.602 "strip_size_kb": 64, 00:07:14.602 "state": "online", 00:07:14.602 "raid_level": "raid0", 00:07:14.602 "superblock": false, 00:07:14.602 "num_base_bdevs": 2, 00:07:14.602 "num_base_bdevs_discovered": 2, 00:07:14.602 "num_base_bdevs_operational": 2, 00:07:14.602 "base_bdevs_list": [ 00:07:14.602 { 00:07:14.602 "name": "BaseBdev1", 00:07:14.602 "uuid": "b7876b3b-edee-43be-b11b-bc5b86f06320", 00:07:14.602 "is_configured": true, 00:07:14.602 "data_offset": 0, 00:07:14.602 "data_size": 65536 00:07:14.602 }, 00:07:14.602 { 00:07:14.602 "name": "BaseBdev2", 00:07:14.602 "uuid": "9deecf20-2617-4748-bc16-307b8fa68a55", 00:07:14.602 "is_configured": true, 00:07:14.602 "data_offset": 0, 00:07:14.602 "data_size": 65536 00:07:14.602 } 00:07:14.602 ] 00:07:14.602 }' 00:07:14.602 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.603 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.862 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:14.862 01:27:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:14.862 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:14.862 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:14.862 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:14.862 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:14.862 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:14.862 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:14.862 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.862 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.862 [2024-10-09 01:27:13.686107] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:14.862 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.862 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:14.862 "name": "Existed_Raid", 00:07:14.862 "aliases": [ 00:07:14.862 "0e539dc6-77f3-442a-8caf-0b2207dc5eb9" 00:07:14.862 ], 00:07:14.862 "product_name": "Raid Volume", 00:07:14.862 "block_size": 512, 00:07:14.862 "num_blocks": 131072, 00:07:14.862 "uuid": "0e539dc6-77f3-442a-8caf-0b2207dc5eb9", 00:07:14.862 "assigned_rate_limits": { 00:07:14.862 "rw_ios_per_sec": 0, 00:07:14.862 "rw_mbytes_per_sec": 0, 00:07:14.862 "r_mbytes_per_sec": 0, 00:07:14.862 "w_mbytes_per_sec": 0 00:07:14.862 }, 00:07:14.862 "claimed": false, 00:07:14.862 "zoned": false, 00:07:14.862 "supported_io_types": { 00:07:14.862 "read": true, 00:07:14.862 "write": true, 00:07:14.862 "unmap": true, 00:07:14.862 
"flush": true, 00:07:14.862 "reset": true, 00:07:14.862 "nvme_admin": false, 00:07:14.862 "nvme_io": false, 00:07:14.862 "nvme_io_md": false, 00:07:14.862 "write_zeroes": true, 00:07:14.862 "zcopy": false, 00:07:14.862 "get_zone_info": false, 00:07:14.862 "zone_management": false, 00:07:14.862 "zone_append": false, 00:07:14.862 "compare": false, 00:07:14.862 "compare_and_write": false, 00:07:14.862 "abort": false, 00:07:14.862 "seek_hole": false, 00:07:14.862 "seek_data": false, 00:07:14.862 "copy": false, 00:07:14.862 "nvme_iov_md": false 00:07:14.862 }, 00:07:14.862 "memory_domains": [ 00:07:14.862 { 00:07:14.862 "dma_device_id": "system", 00:07:14.862 "dma_device_type": 1 00:07:14.862 }, 00:07:14.862 { 00:07:14.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.862 "dma_device_type": 2 00:07:14.862 }, 00:07:14.862 { 00:07:14.863 "dma_device_id": "system", 00:07:14.863 "dma_device_type": 1 00:07:14.863 }, 00:07:14.863 { 00:07:14.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.863 "dma_device_type": 2 00:07:14.863 } 00:07:14.863 ], 00:07:14.863 "driver_specific": { 00:07:14.863 "raid": { 00:07:14.863 "uuid": "0e539dc6-77f3-442a-8caf-0b2207dc5eb9", 00:07:14.863 "strip_size_kb": 64, 00:07:14.863 "state": "online", 00:07:14.863 "raid_level": "raid0", 00:07:14.863 "superblock": false, 00:07:14.863 "num_base_bdevs": 2, 00:07:14.863 "num_base_bdevs_discovered": 2, 00:07:14.863 "num_base_bdevs_operational": 2, 00:07:14.863 "base_bdevs_list": [ 00:07:14.863 { 00:07:14.863 "name": "BaseBdev1", 00:07:14.863 "uuid": "b7876b3b-edee-43be-b11b-bc5b86f06320", 00:07:14.863 "is_configured": true, 00:07:14.863 "data_offset": 0, 00:07:14.863 "data_size": 65536 00:07:14.863 }, 00:07:14.863 { 00:07:14.863 "name": "BaseBdev2", 00:07:14.863 "uuid": "9deecf20-2617-4748-bc16-307b8fa68a55", 00:07:14.863 "is_configured": true, 00:07:14.863 "data_offset": 0, 00:07:14.863 "data_size": 65536 00:07:14.863 } 00:07:14.863 ] 00:07:14.863 } 00:07:14.863 } 00:07:14.863 }' 00:07:14.863 
01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:15.122 BaseBdev2' 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.122 01:27:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.122 [2024-10-09 01:27:13.885976] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:15.122 [2024-10-09 01:27:13.886043] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:15.122 [2024-10-09 01:27:13.886122] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.122 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.122 "name": "Existed_Raid", 00:07:15.122 "uuid": "0e539dc6-77f3-442a-8caf-0b2207dc5eb9", 00:07:15.122 "strip_size_kb": 64, 00:07:15.122 "state": "offline", 00:07:15.122 "raid_level": "raid0", 00:07:15.122 "superblock": false, 00:07:15.122 "num_base_bdevs": 2, 00:07:15.122 "num_base_bdevs_discovered": 1, 00:07:15.122 "num_base_bdevs_operational": 1, 00:07:15.122 
"base_bdevs_list": [ 00:07:15.122 { 00:07:15.122 "name": null, 00:07:15.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.122 "is_configured": false, 00:07:15.122 "data_offset": 0, 00:07:15.122 "data_size": 65536 00:07:15.122 }, 00:07:15.122 { 00:07:15.122 "name": "BaseBdev2", 00:07:15.122 "uuid": "9deecf20-2617-4748-bc16-307b8fa68a55", 00:07:15.122 "is_configured": true, 00:07:15.122 "data_offset": 0, 00:07:15.123 "data_size": 65536 00:07:15.123 } 00:07:15.123 ] 00:07:15.123 }' 00:07:15.123 01:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.123 01:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:15.691 [2024-10-09 01:27:14.330623] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:15.691 [2024-10-09 01:27:14.330727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73184 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73184 ']' 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 73184 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- 
# '[' Linux = Linux ']' 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73184 00:07:15.691 killing process with pid 73184 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73184' 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73184 00:07:15.691 [2024-10-09 01:27:14.447269] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:15.691 01:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73184 00:07:15.691 [2024-10-09 01:27:14.448837] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:15.952 01:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:15.952 00:07:15.952 real 0m3.847s 00:07:15.952 user 0m5.799s 00:07:15.952 sys 0m0.841s 00:07:15.952 01:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.952 ************************************ 00:07:15.952 END TEST raid_state_function_test 00:07:15.952 ************************************ 00:07:15.952 01:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.213 01:27:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:16.213 01:27:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:16.213 01:27:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.213 01:27:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:16.213 ************************************ 00:07:16.213 START TEST 
raid_state_function_test_sb 00:07:16.213 ************************************ 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:16.213 
01:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73426 00:07:16.213 Process raid pid: 73426 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73426' 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73426 00:07:16.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73426 ']' 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.213 01:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.213 [2024-10-09 01:27:14.983776] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:07:16.213 [2024-10-09 01:27:14.984004] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.473 [2024-10-09 01:27:15.117108] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:16.473 [2024-10-09 01:27:15.145732] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.473 [2024-10-09 01:27:15.216962] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.473 [2024-10-09 01:27:15.293857] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.473 [2024-10-09 01:27:15.293905] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.041 01:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.041 01:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:17.041 01:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:17.041 01:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.041 01:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.041 [2024-10-09 01:27:15.806826] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev1 00:07:17.041 [2024-10-09 01:27:15.806944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:17.041 [2024-10-09 01:27:15.806992] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:17.041 [2024-10-09 01:27:15.807026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:17.041 01:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.041 01:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:17.041 01:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.041 01:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:17.041 01:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:17.041 01:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.041 01:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:17.041 01:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.041 01:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.041 01:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.041 01:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.041 01:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.041 01:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.041 01:27:15 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.041 01:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.041 01:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.041 01:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.041 "name": "Existed_Raid", 00:07:17.041 "uuid": "ea521c55-b9bd-46ef-8234-aa06f6721aeb", 00:07:17.041 "strip_size_kb": 64, 00:07:17.041 "state": "configuring", 00:07:17.041 "raid_level": "raid0", 00:07:17.041 "superblock": true, 00:07:17.041 "num_base_bdevs": 2, 00:07:17.041 "num_base_bdevs_discovered": 0, 00:07:17.041 "num_base_bdevs_operational": 2, 00:07:17.041 "base_bdevs_list": [ 00:07:17.041 { 00:07:17.041 "name": "BaseBdev1", 00:07:17.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.041 "is_configured": false, 00:07:17.041 "data_offset": 0, 00:07:17.041 "data_size": 0 00:07:17.041 }, 00:07:17.041 { 00:07:17.041 "name": "BaseBdev2", 00:07:17.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.041 "is_configured": false, 00:07:17.041 "data_offset": 0, 00:07:17.041 "data_size": 0 00:07:17.041 } 00:07:17.041 ] 00:07:17.041 }' 00:07:17.041 01:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.041 01:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.607 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:17.607 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.607 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.607 [2024-10-09 01:27:16.242813] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:17.607 [2024-10-09 01:27:16.242854] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:17.607 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.607 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:17.607 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.607 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.607 [2024-10-09 01:27:16.254828] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:17.607 [2024-10-09 01:27:16.254863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:17.607 [2024-10-09 01:27:16.254874] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:17.607 [2024-10-09 01:27:16.254881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:17.607 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.607 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:17.607 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.607 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.607 [2024-10-09 01:27:16.281578] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:17.607 BaseBdev1 00:07:17.607 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.607 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:17.607 01:27:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:17.607 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:17.607 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:17.607 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:17.607 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:17.607 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:17.607 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.607 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.608 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.608 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:17.608 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.608 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.608 [ 00:07:17.608 { 00:07:17.608 "name": "BaseBdev1", 00:07:17.608 "aliases": [ 00:07:17.608 "8a93280c-4c00-43f1-bb8b-3397b0f70283" 00:07:17.608 ], 00:07:17.608 "product_name": "Malloc disk", 00:07:17.608 "block_size": 512, 00:07:17.608 "num_blocks": 65536, 00:07:17.608 "uuid": "8a93280c-4c00-43f1-bb8b-3397b0f70283", 00:07:17.608 "assigned_rate_limits": { 00:07:17.608 "rw_ios_per_sec": 0, 00:07:17.608 "rw_mbytes_per_sec": 0, 00:07:17.608 "r_mbytes_per_sec": 0, 00:07:17.608 "w_mbytes_per_sec": 0 00:07:17.608 }, 00:07:17.608 "claimed": true, 00:07:17.608 "claim_type": "exclusive_write", 00:07:17.608 "zoned": false, 
00:07:17.608 "supported_io_types": { 00:07:17.608 "read": true, 00:07:17.608 "write": true, 00:07:17.608 "unmap": true, 00:07:17.608 "flush": true, 00:07:17.608 "reset": true, 00:07:17.608 "nvme_admin": false, 00:07:17.608 "nvme_io": false, 00:07:17.608 "nvme_io_md": false, 00:07:17.608 "write_zeroes": true, 00:07:17.608 "zcopy": true, 00:07:17.608 "get_zone_info": false, 00:07:17.608 "zone_management": false, 00:07:17.608 "zone_append": false, 00:07:17.608 "compare": false, 00:07:17.608 "compare_and_write": false, 00:07:17.608 "abort": true, 00:07:17.608 "seek_hole": false, 00:07:17.608 "seek_data": false, 00:07:17.608 "copy": true, 00:07:17.608 "nvme_iov_md": false 00:07:17.608 }, 00:07:17.608 "memory_domains": [ 00:07:17.608 { 00:07:17.608 "dma_device_id": "system", 00:07:17.608 "dma_device_type": 1 00:07:17.608 }, 00:07:17.608 { 00:07:17.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.608 "dma_device_type": 2 00:07:17.608 } 00:07:17.608 ], 00:07:17.608 "driver_specific": {} 00:07:17.608 } 00:07:17.608 ] 00:07:17.608 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.608 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:17.608 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:17.608 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.608 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:17.608 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:17.608 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.608 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:17.608 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.608 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.608 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.608 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.608 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.608 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.608 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.608 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.608 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.608 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.608 "name": "Existed_Raid", 00:07:17.608 "uuid": "7714f7e5-63cd-4013-b89f-7f32ec01b106", 00:07:17.608 "strip_size_kb": 64, 00:07:17.608 "state": "configuring", 00:07:17.608 "raid_level": "raid0", 00:07:17.608 "superblock": true, 00:07:17.608 "num_base_bdevs": 2, 00:07:17.608 "num_base_bdevs_discovered": 1, 00:07:17.608 "num_base_bdevs_operational": 2, 00:07:17.608 "base_bdevs_list": [ 00:07:17.608 { 00:07:17.608 "name": "BaseBdev1", 00:07:17.608 "uuid": "8a93280c-4c00-43f1-bb8b-3397b0f70283", 00:07:17.608 "is_configured": true, 00:07:17.608 "data_offset": 2048, 00:07:17.608 "data_size": 63488 00:07:17.608 }, 00:07:17.608 { 00:07:17.608 "name": "BaseBdev2", 00:07:17.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.608 "is_configured": false, 00:07:17.608 "data_offset": 0, 00:07:17.608 "data_size": 0 00:07:17.608 } 00:07:17.608 ] 
00:07:17.608 }' 00:07:17.608 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.608 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.867 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:17.867 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.867 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.867 [2024-10-09 01:27:16.749744] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:17.867 [2024-10-09 01:27:16.749797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:17.867 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.867 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:17.867 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.867 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.125 [2024-10-09 01:27:16.761779] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:18.125 [2024-10-09 01:27:16.763907] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:18.125 [2024-10-09 01:27:16.763943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:18.125 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.125 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:18.125 01:27:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:18.125 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:18.125 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.125 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:18.125 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:18.125 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.125 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.125 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.125 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.125 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.125 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.125 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.125 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.125 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.125 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.125 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.125 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.125 "name": 
"Existed_Raid", 00:07:18.125 "uuid": "24b3f575-6554-404c-9031-7422104fe0a7", 00:07:18.125 "strip_size_kb": 64, 00:07:18.125 "state": "configuring", 00:07:18.126 "raid_level": "raid0", 00:07:18.126 "superblock": true, 00:07:18.126 "num_base_bdevs": 2, 00:07:18.126 "num_base_bdevs_discovered": 1, 00:07:18.126 "num_base_bdevs_operational": 2, 00:07:18.126 "base_bdevs_list": [ 00:07:18.126 { 00:07:18.126 "name": "BaseBdev1", 00:07:18.126 "uuid": "8a93280c-4c00-43f1-bb8b-3397b0f70283", 00:07:18.126 "is_configured": true, 00:07:18.126 "data_offset": 2048, 00:07:18.126 "data_size": 63488 00:07:18.126 }, 00:07:18.126 { 00:07:18.126 "name": "BaseBdev2", 00:07:18.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.126 "is_configured": false, 00:07:18.126 "data_offset": 0, 00:07:18.126 "data_size": 0 00:07:18.126 } 00:07:18.126 ] 00:07:18.126 }' 00:07:18.126 01:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.126 01:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.385 [2024-10-09 01:27:17.197715] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:18.385 [2024-10-09 01:27:17.197938] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:18.385 [2024-10-09 01:27:17.197959] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:18.385 [2024-10-09 01:27:17.198309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:18.385 [2024-10-09 01:27:17.198490] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:18.385 [2024-10-09 01:27:17.198508] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:18.385 [2024-10-09 01:27:17.198662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.385 BaseBdev2 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:07:18.385 [ 00:07:18.385 { 00:07:18.385 "name": "BaseBdev2", 00:07:18.385 "aliases": [ 00:07:18.385 "59856d74-10e2-4a61-92b7-f408db3bbeee" 00:07:18.385 ], 00:07:18.385 "product_name": "Malloc disk", 00:07:18.385 "block_size": 512, 00:07:18.385 "num_blocks": 65536, 00:07:18.385 "uuid": "59856d74-10e2-4a61-92b7-f408db3bbeee", 00:07:18.385 "assigned_rate_limits": { 00:07:18.385 "rw_ios_per_sec": 0, 00:07:18.385 "rw_mbytes_per_sec": 0, 00:07:18.385 "r_mbytes_per_sec": 0, 00:07:18.385 "w_mbytes_per_sec": 0 00:07:18.385 }, 00:07:18.385 "claimed": true, 00:07:18.385 "claim_type": "exclusive_write", 00:07:18.385 "zoned": false, 00:07:18.385 "supported_io_types": { 00:07:18.385 "read": true, 00:07:18.385 "write": true, 00:07:18.385 "unmap": true, 00:07:18.385 "flush": true, 00:07:18.385 "reset": true, 00:07:18.385 "nvme_admin": false, 00:07:18.385 "nvme_io": false, 00:07:18.385 "nvme_io_md": false, 00:07:18.385 "write_zeroes": true, 00:07:18.385 "zcopy": true, 00:07:18.385 "get_zone_info": false, 00:07:18.385 "zone_management": false, 00:07:18.385 "zone_append": false, 00:07:18.385 "compare": false, 00:07:18.385 "compare_and_write": false, 00:07:18.385 "abort": true, 00:07:18.385 "seek_hole": false, 00:07:18.385 "seek_data": false, 00:07:18.385 "copy": true, 00:07:18.385 "nvme_iov_md": false 00:07:18.385 }, 00:07:18.385 "memory_domains": [ 00:07:18.385 { 00:07:18.385 "dma_device_id": "system", 00:07:18.385 "dma_device_type": 1 00:07:18.385 }, 00:07:18.385 { 00:07:18.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.385 "dma_device_type": 2 00:07:18.385 } 00:07:18.385 ], 00:07:18.385 "driver_specific": {} 00:07:18.385 } 00:07:18.385 ] 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:18.385 
01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.385 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.644 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.644 "name": 
"Existed_Raid", 00:07:18.644 "uuid": "24b3f575-6554-404c-9031-7422104fe0a7", 00:07:18.644 "strip_size_kb": 64, 00:07:18.644 "state": "online", 00:07:18.644 "raid_level": "raid0", 00:07:18.644 "superblock": true, 00:07:18.644 "num_base_bdevs": 2, 00:07:18.644 "num_base_bdevs_discovered": 2, 00:07:18.644 "num_base_bdevs_operational": 2, 00:07:18.644 "base_bdevs_list": [ 00:07:18.644 { 00:07:18.644 "name": "BaseBdev1", 00:07:18.644 "uuid": "8a93280c-4c00-43f1-bb8b-3397b0f70283", 00:07:18.644 "is_configured": true, 00:07:18.644 "data_offset": 2048, 00:07:18.644 "data_size": 63488 00:07:18.644 }, 00:07:18.644 { 00:07:18.644 "name": "BaseBdev2", 00:07:18.644 "uuid": "59856d74-10e2-4a61-92b7-f408db3bbeee", 00:07:18.644 "is_configured": true, 00:07:18.644 "data_offset": 2048, 00:07:18.644 "data_size": 63488 00:07:18.644 } 00:07:18.644 ] 00:07:18.644 }' 00:07:18.644 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.644 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.902 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:18.902 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:18.902 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:18.902 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:18.902 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:18.902 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:18.902 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:18.902 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 
00:07:18.902 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.902 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.903 [2024-10-09 01:27:17.702139] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:18.903 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.903 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:18.903 "name": "Existed_Raid", 00:07:18.903 "aliases": [ 00:07:18.903 "24b3f575-6554-404c-9031-7422104fe0a7" 00:07:18.903 ], 00:07:18.903 "product_name": "Raid Volume", 00:07:18.903 "block_size": 512, 00:07:18.903 "num_blocks": 126976, 00:07:18.903 "uuid": "24b3f575-6554-404c-9031-7422104fe0a7", 00:07:18.903 "assigned_rate_limits": { 00:07:18.903 "rw_ios_per_sec": 0, 00:07:18.903 "rw_mbytes_per_sec": 0, 00:07:18.903 "r_mbytes_per_sec": 0, 00:07:18.903 "w_mbytes_per_sec": 0 00:07:18.903 }, 00:07:18.903 "claimed": false, 00:07:18.903 "zoned": false, 00:07:18.903 "supported_io_types": { 00:07:18.903 "read": true, 00:07:18.903 "write": true, 00:07:18.903 "unmap": true, 00:07:18.903 "flush": true, 00:07:18.903 "reset": true, 00:07:18.903 "nvme_admin": false, 00:07:18.903 "nvme_io": false, 00:07:18.903 "nvme_io_md": false, 00:07:18.903 "write_zeroes": true, 00:07:18.903 "zcopy": false, 00:07:18.903 "get_zone_info": false, 00:07:18.903 "zone_management": false, 00:07:18.903 "zone_append": false, 00:07:18.903 "compare": false, 00:07:18.903 "compare_and_write": false, 00:07:18.903 "abort": false, 00:07:18.903 "seek_hole": false, 00:07:18.903 "seek_data": false, 00:07:18.903 "copy": false, 00:07:18.903 "nvme_iov_md": false 00:07:18.903 }, 00:07:18.903 "memory_domains": [ 00:07:18.903 { 00:07:18.903 "dma_device_id": "system", 00:07:18.903 "dma_device_type": 1 00:07:18.903 }, 00:07:18.903 { 00:07:18.903 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:18.903 "dma_device_type": 2 00:07:18.903 }, 00:07:18.903 { 00:07:18.903 "dma_device_id": "system", 00:07:18.903 "dma_device_type": 1 00:07:18.903 }, 00:07:18.903 { 00:07:18.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.903 "dma_device_type": 2 00:07:18.903 } 00:07:18.903 ], 00:07:18.903 "driver_specific": { 00:07:18.903 "raid": { 00:07:18.903 "uuid": "24b3f575-6554-404c-9031-7422104fe0a7", 00:07:18.903 "strip_size_kb": 64, 00:07:18.903 "state": "online", 00:07:18.903 "raid_level": "raid0", 00:07:18.903 "superblock": true, 00:07:18.903 "num_base_bdevs": 2, 00:07:18.903 "num_base_bdevs_discovered": 2, 00:07:18.903 "num_base_bdevs_operational": 2, 00:07:18.903 "base_bdevs_list": [ 00:07:18.903 { 00:07:18.903 "name": "BaseBdev1", 00:07:18.903 "uuid": "8a93280c-4c00-43f1-bb8b-3397b0f70283", 00:07:18.903 "is_configured": true, 00:07:18.903 "data_offset": 2048, 00:07:18.903 "data_size": 63488 00:07:18.903 }, 00:07:18.903 { 00:07:18.903 "name": "BaseBdev2", 00:07:18.903 "uuid": "59856d74-10e2-4a61-92b7-f408db3bbeee", 00:07:18.903 "is_configured": true, 00:07:18.903 "data_offset": 2048, 00:07:18.903 "data_size": 63488 00:07:18.903 } 00:07:18.903 ] 00:07:18.903 } 00:07:18.903 } 00:07:18.903 }' 00:07:18.903 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:18.903 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:18.903 BaseBdev2' 00:07:18.903 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:19.162 01:27:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.162 [2024-10-09 01:27:17.918042] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:19.162 [2024-10-09 01:27:17.918079] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:19.162 [2024-10-09 01:27:17.918134] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.162 "name": "Existed_Raid", 00:07:19.162 "uuid": "24b3f575-6554-404c-9031-7422104fe0a7", 00:07:19.162 "strip_size_kb": 64, 00:07:19.162 "state": "offline", 00:07:19.162 "raid_level": "raid0", 00:07:19.162 "superblock": true, 00:07:19.162 "num_base_bdevs": 2, 00:07:19.162 "num_base_bdevs_discovered": 1, 00:07:19.162 "num_base_bdevs_operational": 1, 00:07:19.162 "base_bdevs_list": [ 00:07:19.162 { 00:07:19.162 "name": null, 00:07:19.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.162 "is_configured": false, 00:07:19.162 "data_offset": 0, 00:07:19.162 "data_size": 63488 00:07:19.162 }, 00:07:19.162 { 00:07:19.162 "name": "BaseBdev2", 00:07:19.162 "uuid": "59856d74-10e2-4a61-92b7-f408db3bbeee", 00:07:19.162 "is_configured": true, 00:07:19.162 "data_offset": 2048, 00:07:19.162 "data_size": 63488 00:07:19.162 } 00:07:19.162 ] 00:07:19.162 }' 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:07:19.162 01:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.730 [2024-10-09 01:27:18.390580] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:19.730 [2024-10-09 01:27:18.390661] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:19.730 01:27:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73426 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73426 ']' 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73426 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73426 00:07:19.730 killing process with pid 73426 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.730 01:27:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73426' 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73426 00:07:19.730 [2024-10-09 01:27:18.496146] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:19.730 01:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73426 00:07:19.730 [2024-10-09 01:27:18.497721] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:19.990 01:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:19.990 00:07:19.990 real 0m3.978s 00:07:19.990 user 0m6.010s 00:07:19.990 sys 0m0.884s 00:07:19.990 01:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.990 01:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.990 ************************************ 00:07:19.990 END TEST raid_state_function_test_sb 00:07:19.990 ************************************ 00:07:20.250 01:27:18 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:20.250 01:27:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:20.251 01:27:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.251 01:27:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:20.251 ************************************ 00:07:20.251 START TEST raid_superblock_test 00:07:20.251 ************************************ 00:07:20.251 01:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:07:20.251 01:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:20.251 01:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:20.251 01:27:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:20.251 01:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:20.251 01:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:20.251 01:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:20.251 01:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:20.251 01:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:20.251 01:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:20.251 01:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:20.251 01:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:20.251 01:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:20.251 01:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:20.251 01:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:20.251 01:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:20.251 01:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:20.251 01:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73667 00:07:20.251 01:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:20.251 01:27:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73667 00:07:20.251 01:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 73667 ']' 00:07:20.251 01:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 
-- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.251 01:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.251 01:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.251 01:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.251 01:27:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.251 [2024-10-09 01:27:19.028695] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:07:20.251 [2024-10-09 01:27:19.029172] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73667 ] 00:07:20.510 [2024-10-09 01:27:19.159668] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:20.510 [2024-10-09 01:27:19.188371] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.510 [2024-10-09 01:27:19.256743] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.510 [2024-10-09 01:27:19.332499] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.510 [2024-10-09 01:27:19.332547] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.076 01:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.076 01:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:21.076 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:21.076 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:21.076 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:21.076 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:21.076 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:21.076 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:21.076 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:21.076 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:21.076 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:21.076 01:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.076 01:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.076 malloc1 00:07:21.076 01:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:21.076 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:21.076 01:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.076 01:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.076 [2024-10-09 01:27:19.887363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:21.076 [2024-10-09 01:27:19.887432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:21.076 [2024-10-09 01:27:19.887453] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:21.077 [2024-10-09 01:27:19.887465] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:21.077 [2024-10-09 01:27:19.889946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:21.077 [2024-10-09 01:27:19.889976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:21.077 pt1 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.077 malloc2 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.077 [2024-10-09 01:27:19.937846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:21.077 [2024-10-09 01:27:19.937935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:21.077 [2024-10-09 01:27:19.937972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:21.077 [2024-10-09 01:27:19.937992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:21.077 [2024-10-09 01:27:19.942446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:21.077 [2024-10-09 01:27:19.942488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:21.077 pt2 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.077 [2024-10-09 01:27:19.950787] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:21.077 [2024-10-09 01:27:19.953329] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:21.077 [2024-10-09 01:27:19.953483] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:21.077 [2024-10-09 01:27:19.953499] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:21.077 [2024-10-09 01:27:19.953795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:21.077 [2024-10-09 01:27:19.953943] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:21.077 [2024-10-09 01:27:19.953962] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:21.077 [2024-10-09 01:27:19.954099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:21.077 01:27:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.077 01:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.337 01:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.337 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.337 "name": "raid_bdev1", 00:07:21.337 "uuid": "f61b2fe2-3c82-4804-8b37-8f2f46aa6327", 00:07:21.337 "strip_size_kb": 64, 00:07:21.337 "state": "online", 00:07:21.337 "raid_level": "raid0", 00:07:21.337 "superblock": true, 00:07:21.337 "num_base_bdevs": 2, 00:07:21.337 "num_base_bdevs_discovered": 2, 00:07:21.337 "num_base_bdevs_operational": 2, 00:07:21.337 "base_bdevs_list": [ 00:07:21.337 { 00:07:21.337 "name": "pt1", 00:07:21.337 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:21.337 "is_configured": true, 00:07:21.337 "data_offset": 2048, 00:07:21.337 "data_size": 63488 00:07:21.337 }, 00:07:21.337 { 00:07:21.337 "name": "pt2", 00:07:21.337 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:21.337 
"is_configured": true, 00:07:21.337 "data_offset": 2048, 00:07:21.337 "data_size": 63488 00:07:21.337 } 00:07:21.337 ] 00:07:21.337 }' 00:07:21.337 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.337 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.595 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:21.595 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:21.595 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:21.595 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:21.595 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:21.595 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:21.595 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:21.595 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:21.595 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.595 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.595 [2024-10-09 01:27:20.403068] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.595 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.595 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:21.595 "name": "raid_bdev1", 00:07:21.595 "aliases": [ 00:07:21.595 "f61b2fe2-3c82-4804-8b37-8f2f46aa6327" 00:07:21.595 ], 00:07:21.595 "product_name": "Raid Volume", 00:07:21.595 "block_size": 512, 00:07:21.595 "num_blocks": 126976, 00:07:21.595 "uuid": 
"f61b2fe2-3c82-4804-8b37-8f2f46aa6327", 00:07:21.595 "assigned_rate_limits": { 00:07:21.595 "rw_ios_per_sec": 0, 00:07:21.595 "rw_mbytes_per_sec": 0, 00:07:21.595 "r_mbytes_per_sec": 0, 00:07:21.595 "w_mbytes_per_sec": 0 00:07:21.595 }, 00:07:21.595 "claimed": false, 00:07:21.595 "zoned": false, 00:07:21.595 "supported_io_types": { 00:07:21.595 "read": true, 00:07:21.595 "write": true, 00:07:21.595 "unmap": true, 00:07:21.596 "flush": true, 00:07:21.596 "reset": true, 00:07:21.596 "nvme_admin": false, 00:07:21.596 "nvme_io": false, 00:07:21.596 "nvme_io_md": false, 00:07:21.596 "write_zeroes": true, 00:07:21.596 "zcopy": false, 00:07:21.596 "get_zone_info": false, 00:07:21.596 "zone_management": false, 00:07:21.596 "zone_append": false, 00:07:21.596 "compare": false, 00:07:21.596 "compare_and_write": false, 00:07:21.596 "abort": false, 00:07:21.596 "seek_hole": false, 00:07:21.596 "seek_data": false, 00:07:21.596 "copy": false, 00:07:21.596 "nvme_iov_md": false 00:07:21.596 }, 00:07:21.596 "memory_domains": [ 00:07:21.596 { 00:07:21.596 "dma_device_id": "system", 00:07:21.596 "dma_device_type": 1 00:07:21.596 }, 00:07:21.596 { 00:07:21.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.596 "dma_device_type": 2 00:07:21.596 }, 00:07:21.596 { 00:07:21.596 "dma_device_id": "system", 00:07:21.596 "dma_device_type": 1 00:07:21.596 }, 00:07:21.596 { 00:07:21.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.596 "dma_device_type": 2 00:07:21.596 } 00:07:21.596 ], 00:07:21.596 "driver_specific": { 00:07:21.596 "raid": { 00:07:21.596 "uuid": "f61b2fe2-3c82-4804-8b37-8f2f46aa6327", 00:07:21.596 "strip_size_kb": 64, 00:07:21.596 "state": "online", 00:07:21.596 "raid_level": "raid0", 00:07:21.596 "superblock": true, 00:07:21.596 "num_base_bdevs": 2, 00:07:21.596 "num_base_bdevs_discovered": 2, 00:07:21.596 "num_base_bdevs_operational": 2, 00:07:21.596 "base_bdevs_list": [ 00:07:21.596 { 00:07:21.596 "name": "pt1", 00:07:21.596 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:21.596 "is_configured": true, 00:07:21.596 "data_offset": 2048, 00:07:21.596 "data_size": 63488 00:07:21.596 }, 00:07:21.596 { 00:07:21.596 "name": "pt2", 00:07:21.596 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:21.596 "is_configured": true, 00:07:21.596 "data_offset": 2048, 00:07:21.596 "data_size": 63488 00:07:21.596 } 00:07:21.596 ] 00:07:21.596 } 00:07:21.596 } 00:07:21.596 }' 00:07:21.596 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:21.596 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:21.596 pt2' 00:07:21.596 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:21.855 [2024-10-09 01:27:20.627030] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f61b2fe2-3c82-4804-8b37-8f2f46aa6327 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f61b2fe2-3c82-4804-8b37-8f2f46aa6327 ']' 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.855 01:27:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.855 [2024-10-09 01:27:20.678819] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:21.855 [2024-10-09 01:27:20.678884] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:21.855 [2024-10-09 01:27:20.678981] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:21.855 [2024-10-09 01:27:20.679031] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:21.855 [2024-10-09 01:27:20.679045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.855 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.115 [2024-10-09 01:27:20.806892] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:22.115 [2024-10-09 01:27:20.809071] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:22.115 [2024-10-09 01:27:20.809185] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:22.115 [2024-10-09 01:27:20.809304] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:22.115 [2024-10-09 01:27:20.809355] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:22.115 [2024-10-09 01:27:20.809391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:22.115 request: 00:07:22.115 { 00:07:22.115 "name": "raid_bdev1", 00:07:22.115 "raid_level": "raid0", 00:07:22.115 "base_bdevs": [ 00:07:22.115 "malloc1", 00:07:22.115 "malloc2" 00:07:22.115 ], 00:07:22.115 "strip_size_kb": 64, 00:07:22.115 "superblock": false, 00:07:22.115 "method": "bdev_raid_create", 00:07:22.115 "req_id": 1 00:07:22.115 } 00:07:22.115 Got JSON-RPC error response 00:07:22.115 response: 00:07:22.115 { 00:07:22.115 "code": -17, 00:07:22.115 "message": "Failed to create RAID bdev raid_bdev1: File exists" 
00:07:22.115 } 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.115 [2024-10-09 01:27:20.874898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:22.115 [2024-10-09 01:27:20.874985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:22.115 [2024-10-09 01:27:20.875015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:22.115 
[2024-10-09 01:27:20.875055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:22.115 [2024-10-09 01:27:20.877314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:22.115 [2024-10-09 01:27:20.877393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:22.115 [2024-10-09 01:27:20.877477] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:22.115 [2024-10-09 01:27:20.877565] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:22.115 pt1 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.115 "name": "raid_bdev1", 00:07:22.115 "uuid": "f61b2fe2-3c82-4804-8b37-8f2f46aa6327", 00:07:22.115 "strip_size_kb": 64, 00:07:22.115 "state": "configuring", 00:07:22.115 "raid_level": "raid0", 00:07:22.115 "superblock": true, 00:07:22.115 "num_base_bdevs": 2, 00:07:22.115 "num_base_bdevs_discovered": 1, 00:07:22.115 "num_base_bdevs_operational": 2, 00:07:22.115 "base_bdevs_list": [ 00:07:22.115 { 00:07:22.115 "name": "pt1", 00:07:22.115 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:22.115 "is_configured": true, 00:07:22.115 "data_offset": 2048, 00:07:22.115 "data_size": 63488 00:07:22.115 }, 00:07:22.115 { 00:07:22.115 "name": null, 00:07:22.115 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:22.115 "is_configured": false, 00:07:22.115 "data_offset": 2048, 00:07:22.115 "data_size": 63488 00:07:22.115 } 00:07:22.115 ] 00:07:22.115 }' 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.115 01:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.683 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:22.683 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:22.683 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:22.683 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:07:22.683 01:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.683 01:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.683 [2024-10-09 01:27:21.302972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:22.683 [2024-10-09 01:27:21.303065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:22.683 [2024-10-09 01:27:21.303097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:22.683 [2024-10-09 01:27:21.303124] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:22.683 [2024-10-09 01:27:21.303486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:22.683 [2024-10-09 01:27:21.303566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:22.683 [2024-10-09 01:27:21.303644] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:22.683 [2024-10-09 01:27:21.303692] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:22.683 [2024-10-09 01:27:21.303788] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:22.683 [2024-10-09 01:27:21.303826] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:22.683 [2024-10-09 01:27:21.304074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:22.683 [2024-10-09 01:27:21.304235] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:22.683 [2024-10-09 01:27:21.304273] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:22.683 [2024-10-09 01:27:21.304407] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:22.683 
pt2 00:07:22.683 01:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.683 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:22.683 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:22.683 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:22.683 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:22.683 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:22.683 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:22.683 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.683 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.683 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.683 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.683 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.683 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.683 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.683 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:22.683 01:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.683 01:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.683 01:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.683 01:27:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.683 "name": "raid_bdev1", 00:07:22.683 "uuid": "f61b2fe2-3c82-4804-8b37-8f2f46aa6327", 00:07:22.683 "strip_size_kb": 64, 00:07:22.683 "state": "online", 00:07:22.683 "raid_level": "raid0", 00:07:22.683 "superblock": true, 00:07:22.683 "num_base_bdevs": 2, 00:07:22.683 "num_base_bdevs_discovered": 2, 00:07:22.683 "num_base_bdevs_operational": 2, 00:07:22.683 "base_bdevs_list": [ 00:07:22.683 { 00:07:22.683 "name": "pt1", 00:07:22.683 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:22.683 "is_configured": true, 00:07:22.683 "data_offset": 2048, 00:07:22.683 "data_size": 63488 00:07:22.683 }, 00:07:22.683 { 00:07:22.683 "name": "pt2", 00:07:22.683 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:22.683 "is_configured": true, 00:07:22.683 "data_offset": 2048, 00:07:22.683 "data_size": 63488 00:07:22.683 } 00:07:22.683 ] 00:07:22.683 }' 00:07:22.683 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.683 01:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.942 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:22.942 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:22.942 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:22.942 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:22.942 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:22.942 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:22.942 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:22.942 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:22.942 01:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.942 01:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.942 [2024-10-09 01:27:21.763411] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:22.942 01:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.942 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:22.942 "name": "raid_bdev1", 00:07:22.942 "aliases": [ 00:07:22.942 "f61b2fe2-3c82-4804-8b37-8f2f46aa6327" 00:07:22.942 ], 00:07:22.942 "product_name": "Raid Volume", 00:07:22.942 "block_size": 512, 00:07:22.942 "num_blocks": 126976, 00:07:22.942 "uuid": "f61b2fe2-3c82-4804-8b37-8f2f46aa6327", 00:07:22.942 "assigned_rate_limits": { 00:07:22.942 "rw_ios_per_sec": 0, 00:07:22.942 "rw_mbytes_per_sec": 0, 00:07:22.942 "r_mbytes_per_sec": 0, 00:07:22.942 "w_mbytes_per_sec": 0 00:07:22.942 }, 00:07:22.942 "claimed": false, 00:07:22.942 "zoned": false, 00:07:22.942 "supported_io_types": { 00:07:22.942 "read": true, 00:07:22.942 "write": true, 00:07:22.942 "unmap": true, 00:07:22.942 "flush": true, 00:07:22.942 "reset": true, 00:07:22.942 "nvme_admin": false, 00:07:22.942 "nvme_io": false, 00:07:22.942 "nvme_io_md": false, 00:07:22.942 "write_zeroes": true, 00:07:22.942 "zcopy": false, 00:07:22.942 "get_zone_info": false, 00:07:22.942 "zone_management": false, 00:07:22.942 "zone_append": false, 00:07:22.942 "compare": false, 00:07:22.942 "compare_and_write": false, 00:07:22.942 "abort": false, 00:07:22.942 "seek_hole": false, 00:07:22.942 "seek_data": false, 00:07:22.942 "copy": false, 00:07:22.942 "nvme_iov_md": false 00:07:22.942 }, 00:07:22.942 "memory_domains": [ 00:07:22.942 { 00:07:22.942 "dma_device_id": "system", 00:07:22.942 "dma_device_type": 1 00:07:22.942 }, 00:07:22.942 { 00:07:22.942 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:22.942 "dma_device_type": 2 00:07:22.942 }, 00:07:22.942 { 00:07:22.942 "dma_device_id": "system", 00:07:22.942 "dma_device_type": 1 00:07:22.942 }, 00:07:22.942 { 00:07:22.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.942 "dma_device_type": 2 00:07:22.942 } 00:07:22.942 ], 00:07:22.943 "driver_specific": { 00:07:22.943 "raid": { 00:07:22.943 "uuid": "f61b2fe2-3c82-4804-8b37-8f2f46aa6327", 00:07:22.943 "strip_size_kb": 64, 00:07:22.943 "state": "online", 00:07:22.943 "raid_level": "raid0", 00:07:22.943 "superblock": true, 00:07:22.943 "num_base_bdevs": 2, 00:07:22.943 "num_base_bdevs_discovered": 2, 00:07:22.943 "num_base_bdevs_operational": 2, 00:07:22.943 "base_bdevs_list": [ 00:07:22.943 { 00:07:22.943 "name": "pt1", 00:07:22.943 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:22.943 "is_configured": true, 00:07:22.943 "data_offset": 2048, 00:07:22.943 "data_size": 63488 00:07:22.943 }, 00:07:22.943 { 00:07:22.943 "name": "pt2", 00:07:22.943 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:22.943 "is_configured": true, 00:07:22.943 "data_offset": 2048, 00:07:22.943 "data_size": 63488 00:07:22.943 } 00:07:22.943 ] 00:07:22.943 } 00:07:22.943 } 00:07:22.943 }' 00:07:22.943 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:22.943 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:22.943 pt2' 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:23.203 [2024-10-09 01:27:21.947419] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f61b2fe2-3c82-4804-8b37-8f2f46aa6327 '!=' f61b2fe2-3c82-4804-8b37-8f2f46aa6327 ']' 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73667 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 73667 ']' 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 73667 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.203 01:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73667 00:07:23.203 01:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:23.203 01:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:23.203 01:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73667' 00:07:23.203 killing process with pid 73667 00:07:23.203 01:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 73667 00:07:23.203 [2024-10-09 01:27:22.035513] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:07:23.203 [2024-10-09 01:27:22.035685] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:23.203 01:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 73667 00:07:23.203 [2024-10-09 01:27:22.035767] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:23.203 [2024-10-09 01:27:22.035783] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:23.203 [2024-10-09 01:27:22.076171] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:23.773 01:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:23.773 00:07:23.773 real 0m3.503s 00:07:23.773 user 0m5.207s 00:07:23.773 sys 0m0.783s 00:07:23.773 01:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.773 01:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.773 ************************************ 00:07:23.773 END TEST raid_superblock_test 00:07:23.773 ************************************ 00:07:23.773 01:27:22 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:23.773 01:27:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:23.773 01:27:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.773 01:27:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:23.773 ************************************ 00:07:23.773 START TEST raid_read_error_test 00:07:23.773 ************************************ 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=2 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:23.773 01:27:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Clp9gnpfyV 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73862 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73862 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73862 ']' 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.773 01:27:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.773 [2024-10-09 01:27:22.618636] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:07:23.773 [2024-10-09 01:27:22.618843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73862 ] 00:07:24.032 [2024-10-09 01:27:22.751182] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:07:24.032 [2024-10-09 01:27:22.778352] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.032 [2024-10-09 01:27:22.847587] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.032 [2024-10-09 01:27:22.923554] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.032 [2024-10-09 01:27:22.923702] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.600 01:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.600 01:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:24.600 01:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:24.600 01:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:24.600 01:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.600 01:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.600 BaseBdev1_malloc 00:07:24.600 01:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.600 01:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:24.600 01:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.600 01:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.600 true 00:07:24.600 01:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.600 01:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:24.600 01:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:24.600 01:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.600 [2024-10-09 01:27:23.482825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:24.600 [2024-10-09 01:27:23.482898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.600 [2024-10-09 01:27:23.482917] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:24.600 [2024-10-09 01:27:23.482938] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.600 [2024-10-09 01:27:23.485223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.600 [2024-10-09 01:27:23.485260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:24.600 BaseBdev1 00:07:24.600 01:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.600 01:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:24.600 01:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:24.600 01:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.600 01:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.859 BaseBdev2_malloc 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.859 true 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.859 [2024-10-09 01:27:23.546001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:24.859 [2024-10-09 01:27:23.546184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.859 [2024-10-09 01:27:23.546219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:24.859 [2024-10-09 01:27:23.546237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.859 BaseBdev2 00:07:24.859 [2024-10-09 01:27:23.549806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.859 [2024-10-09 01:27:23.549850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.859 [2024-10-09 01:27:23.558091] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:24.859 [2024-10-09 01:27:23.560429] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:24.859 [2024-10-09 01:27:23.560657] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:07:24.859 [2024-10-09 01:27:23.560708] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:24.859 [2024-10-09 01:27:23.560986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:24.859 [2024-10-09 01:27:23.561153] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:24.859 [2024-10-09 01:27:23.561194] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:24.859 [2024-10-09 01:27:23.561365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.859 "name": "raid_bdev1", 00:07:24.859 "uuid": "bce0edb5-efd8-46be-83c0-d4b331a97ef0", 00:07:24.859 "strip_size_kb": 64, 00:07:24.859 "state": "online", 00:07:24.859 "raid_level": "raid0", 00:07:24.859 "superblock": true, 00:07:24.859 "num_base_bdevs": 2, 00:07:24.859 "num_base_bdevs_discovered": 2, 00:07:24.859 "num_base_bdevs_operational": 2, 00:07:24.859 "base_bdevs_list": [ 00:07:24.859 { 00:07:24.859 "name": "BaseBdev1", 00:07:24.859 "uuid": "85eda44a-747e-5983-ac45-54dc64fd64ad", 00:07:24.859 "is_configured": true, 00:07:24.859 "data_offset": 2048, 00:07:24.859 "data_size": 63488 00:07:24.859 }, 00:07:24.859 { 00:07:24.859 "name": "BaseBdev2", 00:07:24.859 "uuid": "a048f4c7-72df-57fe-9e15-dca31f02ca94", 00:07:24.859 "is_configured": true, 00:07:24.859 "data_offset": 2048, 00:07:24.859 "data_size": 63488 00:07:24.859 } 00:07:24.859 ] 00:07:24.859 }' 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.859 01:27:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.427 01:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:25.427 01:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:25.427 [2024-10-09 01:27:24.110653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:26.363 
01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:26.364 01:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.364 01:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.364 01:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.364 01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:26.364 01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:26.364 01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:26.364 01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:26.364 01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:26.364 01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:26.364 01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:26.364 01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.364 01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.364 01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.364 01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.364 01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.364 01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.364 01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:07:26.364 01:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.364 01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:26.364 01:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.364 01:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.364 01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.364 "name": "raid_bdev1", 00:07:26.364 "uuid": "bce0edb5-efd8-46be-83c0-d4b331a97ef0", 00:07:26.364 "strip_size_kb": 64, 00:07:26.364 "state": "online", 00:07:26.364 "raid_level": "raid0", 00:07:26.364 "superblock": true, 00:07:26.364 "num_base_bdevs": 2, 00:07:26.364 "num_base_bdevs_discovered": 2, 00:07:26.364 "num_base_bdevs_operational": 2, 00:07:26.364 "base_bdevs_list": [ 00:07:26.364 { 00:07:26.364 "name": "BaseBdev1", 00:07:26.364 "uuid": "85eda44a-747e-5983-ac45-54dc64fd64ad", 00:07:26.364 "is_configured": true, 00:07:26.364 "data_offset": 2048, 00:07:26.364 "data_size": 63488 00:07:26.364 }, 00:07:26.364 { 00:07:26.364 "name": "BaseBdev2", 00:07:26.364 "uuid": "a048f4c7-72df-57fe-9e15-dca31f02ca94", 00:07:26.364 "is_configured": true, 00:07:26.364 "data_offset": 2048, 00:07:26.364 "data_size": 63488 00:07:26.364 } 00:07:26.364 ] 00:07:26.364 }' 00:07:26.364 01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.364 01:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.623 01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:26.623 01:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.623 01:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.623 [2024-10-09 01:27:25.420960] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:26.623 [2024-10-09 01:27:25.421012] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:26.623 [2024-10-09 01:27:25.423399] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:26.623 [2024-10-09 01:27:25.423453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.623 [2024-10-09 01:27:25.423486] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:26.623 [2024-10-09 01:27:25.423498] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:26.623 { 00:07:26.623 "results": [ 00:07:26.623 { 00:07:26.623 "job": "raid_bdev1", 00:07:26.623 "core_mask": "0x1", 00:07:26.623 "workload": "randrw", 00:07:26.623 "percentage": 50, 00:07:26.623 "status": "finished", 00:07:26.623 "queue_depth": 1, 00:07:26.623 "io_size": 131072, 00:07:26.623 "runtime": 1.308297, 00:07:26.623 "iops": 16301.344419501076, 00:07:26.623 "mibps": 2037.6680524376345, 00:07:26.623 "io_failed": 1, 00:07:26.623 "io_timeout": 0, 00:07:26.623 "avg_latency_us": 85.7908567615189, 00:07:26.623 "min_latency_us": 24.43301664778175, 00:07:26.623 "max_latency_us": 1299.5241000610129 00:07:26.623 } 00:07:26.623 ], 00:07:26.623 "core_count": 1 00:07:26.623 } 00:07:26.623 01:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.623 01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73862 00:07:26.623 01:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73862 ']' 00:07:26.623 01:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73862 00:07:26.623 01:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:26.623 01:27:25 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:26.623 01:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73862 00:07:26.623 killing process with pid 73862 00:07:26.623 01:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:26.623 01:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:26.623 01:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73862' 00:07:26.623 01:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73862 00:07:26.623 [2024-10-09 01:27:25.473028] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:26.623 01:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73862 00:07:26.623 [2024-10-09 01:27:25.500547] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:27.190 01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Clp9gnpfyV 00:07:27.190 01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:27.190 01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:27.190 01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:07:27.190 01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:27.190 01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:27.190 01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:27.190 01:27:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:07:27.190 00:07:27.190 real 0m3.358s 00:07:27.190 user 0m4.088s 00:07:27.190 sys 0m0.609s 00:07:27.190 01:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:07:27.190 01:27:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.190 ************************************ 00:07:27.190 END TEST raid_read_error_test 00:07:27.190 ************************************ 00:07:27.190 01:27:25 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:27.190 01:27:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:27.190 01:27:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.190 01:27:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:27.190 ************************************ 00:07:27.190 START TEST raid_write_error_test 00:07:27.190 ************************************ 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dNOH7mLOh0 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73997 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73997 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73997 ']' 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.190 01:27:25 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.190 01:27:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.190 [2024-10-09 01:27:26.053227] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:07:27.190 [2024-10-09 01:27:26.053364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73997 ] 00:07:27.448 [2024-10-09 01:27:26.189418] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:27.448 [2024-10-09 01:27:26.216091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.448 [2024-10-09 01:27:26.283581] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.707 [2024-10-09 01:27:26.359607] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.707 [2024-10-09 01:27:26.359649] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.274 BaseBdev1_malloc 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.274 true 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.274 01:27:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.274 [2024-10-09 01:27:26.919129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:28.274 [2024-10-09 01:27:26.919208] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:28.274 [2024-10-09 01:27:26.919232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:28.274 [2024-10-09 01:27:26.919255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:28.274 [2024-10-09 01:27:26.921635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:28.274 [2024-10-09 01:27:26.921670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:28.274 BaseBdev1 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.274 BaseBdev2_malloc 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.274 true 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.274 [2024-10-09 01:27:26.976814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:28.274 [2024-10-09 01:27:26.976880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:28.274 [2024-10-09 01:27:26.976897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:28.274 [2024-10-09 01:27:26.976908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:28.274 [2024-10-09 01:27:26.979233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:28.274 [2024-10-09 01:27:26.979272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:28.274 BaseBdev2 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.274 [2024-10-09 01:27:26.988868] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:28.274 [2024-10-09 01:27:26.990944] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:28.274 [2024-10-09 01:27:26.991119] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:28.274 
[2024-10-09 01:27:26.991134] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:28.274 [2024-10-09 01:27:26.991388] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:28.274 [2024-10-09 01:27:26.991557] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:28.274 [2024-10-09 01:27:26.991573] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:28.274 [2024-10-09 01:27:26.991734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.274 01:27:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.275 01:27:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:28.275 01:27:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.275 01:27:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.275 01:27:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.275 01:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.275 "name": "raid_bdev1", 00:07:28.275 "uuid": "b5082a55-2957-4e9f-b10d-b1a7fa80b4d8", 00:07:28.275 "strip_size_kb": 64, 00:07:28.275 "state": "online", 00:07:28.275 "raid_level": "raid0", 00:07:28.275 "superblock": true, 00:07:28.275 "num_base_bdevs": 2, 00:07:28.275 "num_base_bdevs_discovered": 2, 00:07:28.275 "num_base_bdevs_operational": 2, 00:07:28.275 "base_bdevs_list": [ 00:07:28.275 { 00:07:28.275 "name": "BaseBdev1", 00:07:28.275 "uuid": "6c096eeb-6dc8-5307-b724-a70036a67d7d", 00:07:28.275 "is_configured": true, 00:07:28.275 "data_offset": 2048, 00:07:28.275 "data_size": 63488 00:07:28.275 }, 00:07:28.275 { 00:07:28.275 "name": "BaseBdev2", 00:07:28.275 "uuid": "2beab944-8e2e-585b-a5cf-e64962ef1dd6", 00:07:28.275 "is_configured": true, 00:07:28.275 "data_offset": 2048, 00:07:28.275 "data_size": 63488 00:07:28.275 } 00:07:28.275 ] 00:07:28.275 }' 00:07:28.275 01:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.275 01:27:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.568 01:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:28.568 01:27:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:28.827 [2024-10-09 01:27:27.513439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:29.764 01:27:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:29.764 01:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.764 01:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.764 01:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.764 01:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:29.764 01:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:29.764 01:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:29.764 01:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:29.764 01:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:29.764 01:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:29.764 01:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:29.764 01:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.764 01:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.764 01:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.764 01:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.764 01:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.764 01:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.764 01:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:07:29.764 01:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.764 01:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:29.764 01:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.764 01:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.764 01:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.764 "name": "raid_bdev1", 00:07:29.764 "uuid": "b5082a55-2957-4e9f-b10d-b1a7fa80b4d8", 00:07:29.764 "strip_size_kb": 64, 00:07:29.764 "state": "online", 00:07:29.764 "raid_level": "raid0", 00:07:29.764 "superblock": true, 00:07:29.764 "num_base_bdevs": 2, 00:07:29.764 "num_base_bdevs_discovered": 2, 00:07:29.764 "num_base_bdevs_operational": 2, 00:07:29.764 "base_bdevs_list": [ 00:07:29.764 { 00:07:29.764 "name": "BaseBdev1", 00:07:29.764 "uuid": "6c096eeb-6dc8-5307-b724-a70036a67d7d", 00:07:29.764 "is_configured": true, 00:07:29.764 "data_offset": 2048, 00:07:29.764 "data_size": 63488 00:07:29.764 }, 00:07:29.764 { 00:07:29.764 "name": "BaseBdev2", 00:07:29.764 "uuid": "2beab944-8e2e-585b-a5cf-e64962ef1dd6", 00:07:29.764 "is_configured": true, 00:07:29.764 "data_offset": 2048, 00:07:29.764 "data_size": 63488 00:07:29.764 } 00:07:29.764 ] 00:07:29.764 }' 00:07:29.764 01:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.764 01:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.024 01:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:30.024 01:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.024 01:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.024 [2024-10-09 01:27:28.908270] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:30.024 [2024-10-09 01:27:28.908336] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:30.024 [2024-10-09 01:27:28.910853] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.024 [2024-10-09 01:27:28.910921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.024 [2024-10-09 01:27:28.910958] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:30.024 [2024-10-09 01:27:28.910977] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:30.024 { 00:07:30.024 "results": [ 00:07:30.024 { 00:07:30.024 "job": "raid_bdev1", 00:07:30.024 "core_mask": "0x1", 00:07:30.024 "workload": "randrw", 00:07:30.024 "percentage": 50, 00:07:30.024 "status": "finished", 00:07:30.024 "queue_depth": 1, 00:07:30.024 "io_size": 131072, 00:07:30.024 "runtime": 1.392758, 00:07:30.024 "iops": 15750.762156814033, 00:07:30.024 "mibps": 1968.8452696017541, 00:07:30.024 "io_failed": 1, 00:07:30.024 "io_timeout": 0, 00:07:30.024 "avg_latency_us": 88.76809081784543, 00:07:30.024 "min_latency_us": 24.20988407565589, 00:07:30.024 "max_latency_us": 1363.7862808332607 00:07:30.024 } 00:07:30.024 ], 00:07:30.024 "core_count": 1 00:07:30.024 } 00:07:30.024 01:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.024 01:27:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73997 00:07:30.024 01:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 73997 ']' 00:07:30.024 01:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73997 00:07:30.283 01:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:30.283 01:27:28 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:30.283 01:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73997 00:07:30.283 01:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:30.283 01:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:30.283 killing process with pid 73997 00:07:30.283 01:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73997' 00:07:30.283 01:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73997 00:07:30.283 [2024-10-09 01:27:28.960193] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:30.283 01:27:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73997 00:07:30.283 [2024-10-09 01:27:28.988437] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:30.543 01:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:30.543 01:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dNOH7mLOh0 00:07:30.543 01:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:30.543 01:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:30.543 01:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:30.543 01:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:30.543 01:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:30.543 01:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:30.543 00:07:30.543 real 0m3.418s 00:07:30.543 user 0m4.203s 00:07:30.543 sys 0m0.601s 00:07:30.543 01:27:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.543 01:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.543 ************************************ 00:07:30.543 END TEST raid_write_error_test 00:07:30.543 ************************************ 00:07:30.543 01:27:29 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:30.543 01:27:29 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:30.543 01:27:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:30.543 01:27:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.543 01:27:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:30.803 ************************************ 00:07:30.803 START TEST raid_state_function_test 00:07:30.803 ************************************ 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74129 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74129' 00:07:30.803 Process 
raid pid: 74129 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74129 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 74129 ']' 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.803 01:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.803 [2024-10-09 01:27:29.533453] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:07:30.803 [2024-10-09 01:27:29.533635] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.803 [2024-10-09 01:27:29.666873] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:30.803 [2024-10-09 01:27:29.694156] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.062 [2024-10-09 01:27:29.767511] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.062 [2024-10-09 01:27:29.845603] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.062 [2024-10-09 01:27:29.845640] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.629 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.629 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:31.629 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:31.629 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.629 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.629 [2024-10-09 01:27:30.358813] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:31.629 [2024-10-09 01:27:30.358873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:31.629 [2024-10-09 01:27:30.358885] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:31.629 [2024-10-09 01:27:30.358894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:31.629 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.629 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:31.629 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:31.629 01:27:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:31.629 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:31.629 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.629 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.629 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.629 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.629 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.629 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.629 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.629 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:31.629 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.629 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.629 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.629 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.629 "name": "Existed_Raid", 00:07:31.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.629 "strip_size_kb": 64, 00:07:31.629 "state": "configuring", 00:07:31.629 "raid_level": "concat", 00:07:31.629 "superblock": false, 00:07:31.629 "num_base_bdevs": 2, 00:07:31.629 "num_base_bdevs_discovered": 0, 00:07:31.629 "num_base_bdevs_operational": 2, 00:07:31.629 "base_bdevs_list": [ 00:07:31.629 { 00:07:31.629 "name": "BaseBdev1", 00:07:31.629 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:07:31.629 "is_configured": false, 00:07:31.629 "data_offset": 0, 00:07:31.629 "data_size": 0 00:07:31.629 }, 00:07:31.629 { 00:07:31.629 "name": "BaseBdev2", 00:07:31.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.629 "is_configured": false, 00:07:31.629 "data_offset": 0, 00:07:31.629 "data_size": 0 00:07:31.629 } 00:07:31.629 ] 00:07:31.629 }' 00:07:31.629 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.629 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.887 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:31.887 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.887 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.887 [2024-10-09 01:27:30.762815] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:31.887 [2024-10-09 01:27:30.762858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:31.887 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.887 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:31.887 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.887 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.887 [2024-10-09 01:27:30.774820] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:31.887 [2024-10-09 01:27:30.774854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:31.887 
[2024-10-09 01:27:30.774865] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:31.887 [2024-10-09 01:27:30.774876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:32.145 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.145 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:32.145 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.145 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.145 [2024-10-09 01:27:30.802508] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:32.145 BaseBdev1 00:07:32.145 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.145 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:32.145 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:32.145 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:32.145 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:32.145 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:32.145 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:32.145 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:32.145 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.145 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.145 01:27:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.145 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:32.145 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.145 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.145 [ 00:07:32.145 { 00:07:32.145 "name": "BaseBdev1", 00:07:32.145 "aliases": [ 00:07:32.145 "7eac7bac-03cc-455e-88d4-74adf58718d0" 00:07:32.145 ], 00:07:32.146 "product_name": "Malloc disk", 00:07:32.146 "block_size": 512, 00:07:32.146 "num_blocks": 65536, 00:07:32.146 "uuid": "7eac7bac-03cc-455e-88d4-74adf58718d0", 00:07:32.146 "assigned_rate_limits": { 00:07:32.146 "rw_ios_per_sec": 0, 00:07:32.146 "rw_mbytes_per_sec": 0, 00:07:32.146 "r_mbytes_per_sec": 0, 00:07:32.146 "w_mbytes_per_sec": 0 00:07:32.146 }, 00:07:32.146 "claimed": true, 00:07:32.146 "claim_type": "exclusive_write", 00:07:32.146 "zoned": false, 00:07:32.146 "supported_io_types": { 00:07:32.146 "read": true, 00:07:32.146 "write": true, 00:07:32.146 "unmap": true, 00:07:32.146 "flush": true, 00:07:32.146 "reset": true, 00:07:32.146 "nvme_admin": false, 00:07:32.146 "nvme_io": false, 00:07:32.146 "nvme_io_md": false, 00:07:32.146 "write_zeroes": true, 00:07:32.146 "zcopy": true, 00:07:32.146 "get_zone_info": false, 00:07:32.146 "zone_management": false, 00:07:32.146 "zone_append": false, 00:07:32.146 "compare": false, 00:07:32.146 "compare_and_write": false, 00:07:32.146 "abort": true, 00:07:32.146 "seek_hole": false, 00:07:32.146 "seek_data": false, 00:07:32.146 "copy": true, 00:07:32.146 "nvme_iov_md": false 00:07:32.146 }, 00:07:32.146 "memory_domains": [ 00:07:32.146 { 00:07:32.146 "dma_device_id": "system", 00:07:32.146 "dma_device_type": 1 00:07:32.146 }, 00:07:32.146 { 00:07:32.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.146 "dma_device_type": 
2 00:07:32.146 } 00:07:32.146 ], 00:07:32.146 "driver_specific": {} 00:07:32.146 } 00:07:32.146 ] 00:07:32.146 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.146 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:32.146 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:32.146 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.146 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.146 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:32.146 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.146 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.146 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.146 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.146 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.146 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.146 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.146 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.146 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.146 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.146 01:27:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.146 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.146 "name": "Existed_Raid", 00:07:32.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.146 "strip_size_kb": 64, 00:07:32.146 "state": "configuring", 00:07:32.146 "raid_level": "concat", 00:07:32.146 "superblock": false, 00:07:32.146 "num_base_bdevs": 2, 00:07:32.146 "num_base_bdevs_discovered": 1, 00:07:32.146 "num_base_bdevs_operational": 2, 00:07:32.146 "base_bdevs_list": [ 00:07:32.146 { 00:07:32.146 "name": "BaseBdev1", 00:07:32.146 "uuid": "7eac7bac-03cc-455e-88d4-74adf58718d0", 00:07:32.146 "is_configured": true, 00:07:32.146 "data_offset": 0, 00:07:32.146 "data_size": 65536 00:07:32.146 }, 00:07:32.146 { 00:07:32.146 "name": "BaseBdev2", 00:07:32.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.146 "is_configured": false, 00:07:32.146 "data_offset": 0, 00:07:32.146 "data_size": 0 00:07:32.146 } 00:07:32.146 ] 00:07:32.146 }' 00:07:32.146 01:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.146 01:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.404 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:32.404 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.404 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.404 [2024-10-09 01:27:31.290668] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:32.404 [2024-10-09 01:27:31.290722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:32.404 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.404 01:27:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:32.404 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.663 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.663 [2024-10-09 01:27:31.302697] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:32.663 [2024-10-09 01:27:31.304890] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:32.663 [2024-10-09 01:27:31.304930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:32.663 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.663 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:32.663 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:32.663 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:32.663 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.663 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.663 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:32.663 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.663 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.663 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.663 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:07:32.663 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.663 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.663 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.663 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.663 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.663 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.663 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.663 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.663 "name": "Existed_Raid", 00:07:32.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.663 "strip_size_kb": 64, 00:07:32.663 "state": "configuring", 00:07:32.663 "raid_level": "concat", 00:07:32.663 "superblock": false, 00:07:32.663 "num_base_bdevs": 2, 00:07:32.663 "num_base_bdevs_discovered": 1, 00:07:32.663 "num_base_bdevs_operational": 2, 00:07:32.663 "base_bdevs_list": [ 00:07:32.663 { 00:07:32.663 "name": "BaseBdev1", 00:07:32.663 "uuid": "7eac7bac-03cc-455e-88d4-74adf58718d0", 00:07:32.663 "is_configured": true, 00:07:32.663 "data_offset": 0, 00:07:32.663 "data_size": 65536 00:07:32.663 }, 00:07:32.663 { 00:07:32.663 "name": "BaseBdev2", 00:07:32.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.663 "is_configured": false, 00:07:32.663 "data_offset": 0, 00:07:32.663 "data_size": 0 00:07:32.663 } 00:07:32.663 ] 00:07:32.663 }' 00:07:32.663 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.663 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:32.922 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:07:32.922 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:32.922 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:32.922 [2024-10-09 01:27:31.777792] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:32.922 [2024-10-09 01:27:31.777869] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:07:32.922 [2024-10-09 01:27:31.777885] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:07:32.922 [2024-10-09 01:27:31.778246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:07:32.922 [2024-10-09 01:27:31.778457] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:07:32.922 [2024-10-09 01:27:31.778482] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:07:32.922 [2024-10-09 01:27:31.778740] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:32.922 BaseBdev2
00:07:32.922 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:32.922 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:07:32.922 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:07:32.922 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:07:32.922 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:07:32.922 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:07:32.922 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:07:32.922 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:07:32.922 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:32.922 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:32.922 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:32.922 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:07:32.922 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:32.922 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:32.922 [
00:07:32.922 {
00:07:32.922 "name": "BaseBdev2",
00:07:32.922 "aliases": [
00:07:32.922 "05d734c2-165d-4b33-898e-57cc38b275bb"
00:07:32.922 ],
00:07:32.922 "product_name": "Malloc disk",
00:07:32.922 "block_size": 512,
00:07:32.922 "num_blocks": 65536,
00:07:32.922 "uuid": "05d734c2-165d-4b33-898e-57cc38b275bb",
00:07:32.922 "assigned_rate_limits": {
00:07:32.922 "rw_ios_per_sec": 0,
00:07:32.922 "rw_mbytes_per_sec": 0,
00:07:32.922 "r_mbytes_per_sec": 0,
00:07:32.922 "w_mbytes_per_sec": 0
00:07:32.922 },
00:07:32.922 "claimed": true,
00:07:32.922 "claim_type": "exclusive_write",
00:07:32.922 "zoned": false,
00:07:32.922 "supported_io_types": {
00:07:32.922 "read": true,
00:07:32.922 "write": true,
00:07:32.922 "unmap": true,
00:07:32.922 "flush": true,
00:07:32.922 "reset": true,
00:07:32.922 "nvme_admin": false,
00:07:32.922 "nvme_io": false,
00:07:32.922 "nvme_io_md": false,
00:07:32.922 "write_zeroes": true,
00:07:32.922 "zcopy": true,
00:07:32.922 "get_zone_info": false,
00:07:32.922 "zone_management": false,
00:07:32.922 "zone_append": false,
00:07:32.922 "compare": false,
00:07:32.922 "compare_and_write": false,
00:07:32.922 "abort": true,
00:07:32.922 "seek_hole": false,
00:07:32.922 "seek_data": false,
00:07:32.922 "copy": true,
00:07:32.922 "nvme_iov_md": false
00:07:32.922 },
00:07:32.922 "memory_domains": [
00:07:32.922 {
00:07:32.922 "dma_device_id": "system",
00:07:32.922 "dma_device_type": 1
00:07:32.922 },
00:07:32.922 {
00:07:32.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:32.922 "dma_device_type": 2
00:07:32.922 }
00:07:32.922 ],
00:07:32.922 "driver_specific": {}
00:07:32.922 }
00:07:32.922 ]
00:07:32.922 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:32.922 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:07:33.181 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:07:33.181 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:33.181 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2
00:07:33.181 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:33.181 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:33.181 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:33.181 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:33.181 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:33.181 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:33.181 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:33.181 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:33.181 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:33.182 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:33.182 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.182 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.182 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:33.182 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.182 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:33.182 "name": "Existed_Raid",
00:07:33.182 "uuid": "427c0530-54c6-4961-8e1c-2e53632eb9e2",
00:07:33.182 "strip_size_kb": 64,
00:07:33.182 "state": "online",
00:07:33.182 "raid_level": "concat",
00:07:33.182 "superblock": false,
00:07:33.182 "num_base_bdevs": 2,
00:07:33.182 "num_base_bdevs_discovered": 2,
00:07:33.182 "num_base_bdevs_operational": 2,
00:07:33.182 "base_bdevs_list": [
00:07:33.182 {
00:07:33.182 "name": "BaseBdev1",
00:07:33.182 "uuid": "7eac7bac-03cc-455e-88d4-74adf58718d0",
00:07:33.182 "is_configured": true,
00:07:33.182 "data_offset": 0,
00:07:33.182 "data_size": 65536
00:07:33.182 },
00:07:33.182 {
00:07:33.182 "name": "BaseBdev2",
00:07:33.182 "uuid": "05d734c2-165d-4b33-898e-57cc38b275bb",
00:07:33.182 "is_configured": true,
00:07:33.182 "data_offset": 0,
00:07:33.182 "data_size": 65536
00:07:33.182 }
00:07:33.182 ]
00:07:33.182 }'
00:07:33.182 01:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:33.182 01:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.440 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:07:33.440 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:07:33.440 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:33.440 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:33.440 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:33.440 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:33.440 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:07:33.440 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:33.440 01:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.440 01:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.440 [2024-10-09 01:27:32.318251] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:33.700 "name": "Existed_Raid",
00:07:33.700 "aliases": [
00:07:33.700 "427c0530-54c6-4961-8e1c-2e53632eb9e2"
00:07:33.700 ],
00:07:33.700 "product_name": "Raid Volume",
00:07:33.700 "block_size": 512,
00:07:33.700 "num_blocks": 131072,
00:07:33.700 "uuid": "427c0530-54c6-4961-8e1c-2e53632eb9e2",
00:07:33.700 "assigned_rate_limits": {
00:07:33.700 "rw_ios_per_sec": 0,
00:07:33.700 "rw_mbytes_per_sec": 0,
00:07:33.700 "r_mbytes_per_sec": 0,
00:07:33.700 "w_mbytes_per_sec": 0
00:07:33.700 },
00:07:33.700 "claimed": false,
00:07:33.700 "zoned": false,
00:07:33.700 "supported_io_types": {
00:07:33.700 "read": true,
00:07:33.700 "write": true,
00:07:33.700 "unmap": true,
00:07:33.700 "flush": true,
00:07:33.700 "reset": true,
00:07:33.700 "nvme_admin": false,
00:07:33.700 "nvme_io": false,
00:07:33.700 "nvme_io_md": false,
00:07:33.700 "write_zeroes": true,
00:07:33.700 "zcopy": false,
00:07:33.700 "get_zone_info": false,
00:07:33.700 "zone_management": false,
00:07:33.700 "zone_append": false,
00:07:33.700 "compare": false,
00:07:33.700 "compare_and_write": false,
00:07:33.700 "abort": false,
00:07:33.700 "seek_hole": false,
00:07:33.700 "seek_data": false,
00:07:33.700 "copy": false,
00:07:33.700 "nvme_iov_md": false
00:07:33.700 },
00:07:33.700 "memory_domains": [
00:07:33.700 {
00:07:33.700 "dma_device_id": "system",
00:07:33.700 "dma_device_type": 1
00:07:33.700 },
00:07:33.700 {
00:07:33.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:33.700 "dma_device_type": 2
00:07:33.700 },
00:07:33.700 {
00:07:33.700 "dma_device_id": "system",
00:07:33.700 "dma_device_type": 1
00:07:33.700 },
00:07:33.700 {
00:07:33.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:33.700 "dma_device_type": 2
00:07:33.700 }
00:07:33.700 ],
00:07:33.700 "driver_specific": {
00:07:33.700 "raid": {
00:07:33.700 "uuid": "427c0530-54c6-4961-8e1c-2e53632eb9e2",
00:07:33.700 "strip_size_kb": 64,
00:07:33.700 "state": "online",
00:07:33.700 "raid_level": "concat",
00:07:33.700 "superblock": false,
00:07:33.700 "num_base_bdevs": 2,
00:07:33.700 "num_base_bdevs_discovered": 2,
00:07:33.700 "num_base_bdevs_operational": 2,
00:07:33.700 "base_bdevs_list": [
00:07:33.700 {
00:07:33.700 "name": "BaseBdev1",
00:07:33.700 "uuid": "7eac7bac-03cc-455e-88d4-74adf58718d0",
00:07:33.700 "is_configured": true,
00:07:33.700 "data_offset": 0,
00:07:33.700 "data_size": 65536
00:07:33.700 },
00:07:33.700 {
00:07:33.700 "name": "BaseBdev2",
00:07:33.700 "uuid": "05d734c2-165d-4b33-898e-57cc38b275bb",
00:07:33.700 "is_configured": true,
00:07:33.700 "data_offset": 0,
00:07:33.700 "data_size": 65536
00:07:33.700 }
00:07:33.700 ]
00:07:33.700 }
00:07:33.700 }
00:07:33.700 }'
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:07:33.700 BaseBdev2'
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.700 [2024-10-09 01:27:32.546079] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:07:33.700 [2024-10-09 01:27:32.546108] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:33.700 [2024-10-09 01:27:32.546174] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:33.700 01:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:33.959 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:33.959 "name": "Existed_Raid",
00:07:33.959 "uuid": "427c0530-54c6-4961-8e1c-2e53632eb9e2",
00:07:33.959 "strip_size_kb": 64,
00:07:33.959 "state": "offline",
00:07:33.959 "raid_level": "concat",
00:07:33.959 "superblock": false,
00:07:33.959 "num_base_bdevs": 2,
00:07:33.959 "num_base_bdevs_discovered": 1,
00:07:33.959 "num_base_bdevs_operational": 1,
00:07:33.959 "base_bdevs_list": [
00:07:33.959 {
00:07:33.959 "name": null,
00:07:33.959 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:33.959 "is_configured": false,
00:07:33.959 "data_offset": 0,
00:07:33.959 "data_size": 65536
00:07:33.959 },
00:07:33.959 {
00:07:33.959 "name": "BaseBdev2",
00:07:33.959 "uuid": "05d734c2-165d-4b33-898e-57cc38b275bb",
00:07:33.959 "is_configured": true,
00:07:33.959 "data_offset": 0,
00:07:33.959 "data_size": 65536
00:07:33.959 }
00:07:33.959 ]
00:07:33.959 }'
00:07:33.959 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:33.959 01:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.217 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:07:34.217 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:07:34.217 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:34.217 01:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:07:34.217 01:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:34.217 01:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.217 01:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:34.217 01:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:07:34.217 01:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:07:34.217 01:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:07:34.217 01:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:34.217 01:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.218 [2024-10-09 01:27:33.038602] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:07:34.218 [2024-10-09 01:27:33.038658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:07:34.218 01:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:34.218 01:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:07:34.218 01:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:07:34.218 01:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:34.218 01:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:34.218 01:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.218 01:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:07:34.218 01:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:34.476 01:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:07:34.476 01:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:07:34.476 01:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:07:34.476 01:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74129
00:07:34.476 01:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 74129 ']'
00:07:34.476 01:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 74129
00:07:34.476 01:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname
00:07:34.476 01:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:34.476 01:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74129
00:07:34.476 01:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:34.476 01:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 74129
01:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74129'
01:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 74129
[2024-10-09 01:27:33.155835] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
01:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 74129
[2024-10-09 01:27:33.157396] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
01:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:07:34.736
00:07:34.736 real 0m4.087s
00:07:34.736 user 0m6.219s
00:07:34.736 sys 0m0.918s
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.736 ************************************
00:07:34.736 END TEST raid_state_function_test
00:07:34.736 ************************************
00:07:34.736 01:27:33 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true
00:07:34.736 01:27:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:07:34.736 01:27:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:34.736 01:27:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:34.736 ************************************
00:07:34.736 START TEST raid_state_function_test_sb
00:07:34.736 ************************************
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74370
00:07:34.736 Process raid pid: 74370
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74370'
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74370
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 74370 ']'
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:34.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:34.736 01:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:34.995 [2024-10-09 01:27:33.689149] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization...
00:07:34.995 [2024-10-09 01:27:33.689300] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:34.995 [2024-10-09 01:27:33.822704] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:07:34.995 [2024-10-09 01:27:33.851645] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:35.254 [2024-10-09 01:27:33.921733] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:07:35.254 [2024-10-09 01:27:33.999352] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:35.254 [2024-10-09 01:27:33.999394] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:35.823 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:35.823 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
00:07:35.823 01:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:35.823 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:35.823 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:35.823 [2024-10-09 01:27:34.515661] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
[2024-10-09 01:27:34.515717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
[2024-10-09 01:27:34.515741] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
[2024-10-09 01:27:34.515750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:35.823 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:35.824 01:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:07:35.824 01:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:35.824 01:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:35.824 01:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:35.824 01:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:35.824 01:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:35.824 01:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:35.824 01:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:35.824 01:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:35.824 01:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:35.824 01:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:35.824 01:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:35.824 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:35.824 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:35.824 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:35.824 01:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:35.824 "name": "Existed_Raid",
00:07:35.824 "uuid": "d486a1a5-46a6-40d1-b7b1-02bb8a64ead6",
00:07:35.824 "strip_size_kb": 64,
00:07:35.824 "state": "configuring",
00:07:35.824 "raid_level": "concat",
00:07:35.824 "superblock": true,
00:07:35.824 "num_base_bdevs": 2,
00:07:35.824 "num_base_bdevs_discovered": 0,
00:07:35.824 "num_base_bdevs_operational": 2,
00:07:35.824 "base_bdevs_list": [
00:07:35.824 {
00:07:35.824 "name": "BaseBdev1",
00:07:35.824 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:35.824 "is_configured": false,
00:07:35.824 "data_offset": 0,
00:07:35.824 "data_size": 0
00:07:35.824 },
00:07:35.824 {
00:07:35.824 "name": "BaseBdev2",
00:07:35.824 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:35.824 "is_configured": false,
00:07:35.824 "data_offset": 0,
00:07:35.824 "data_size": 0
00:07:35.824 }
00:07:35.824 ]
00:07:35.824 }'
00:07:35.824 01:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:35.824 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:36.083 01:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:36.083 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:36.083 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:36.083 [2024-10-09 01:27:34.927609] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
[2024-10-09 01:27:34.927659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:07:36.083 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:36.083 01:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:36.083 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:36.083 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:36.083 [2024-10-09 01:27:34.939621] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
[2024-10-09 01:27:34.939660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
[2024-10-09 01:27:34.939671] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
[2024-10-09 01:27:34.939681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:36.083 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:36.083 01:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:07:36.083 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:36.083 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:36.083 [2024-10-09 01:27:34.966641] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:36.083 BaseBdev1
00:07:36.083 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:36.083 01:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:07:36.083 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:07:36.083 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:07:36.083 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:07:36.083 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:07:36.083 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:07:36.083 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:07:36.083 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:36.083 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:36.342 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:36.342 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:07:36.342 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:36.342 01:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:36.342 [
00:07:36.342 {
00:07:36.342 "name": "BaseBdev1",
00:07:36.342 "aliases": [
00:07:36.342 "c95bfaf4-9954-4d0c-8b6f-859a50f4681f"
00:07:36.342 ],
00:07:36.342 "product_name": "Malloc disk",
00:07:36.342 "block_size": 512,
00:07:36.342 "num_blocks": 65536,
00:07:36.342 "uuid": "c95bfaf4-9954-4d0c-8b6f-859a50f4681f",
00:07:36.342 "assigned_rate_limits": {
00:07:36.342 "rw_ios_per_sec": 0,
00:07:36.342 "rw_mbytes_per_sec": 0,
00:07:36.342 "r_mbytes_per_sec": 0,
00:07:36.342 "w_mbytes_per_sec": 0
00:07:36.342 },
00:07:36.342 "claimed": true,
00:07:36.342 "claim_type": "exclusive_write",
00:07:36.342 "zoned":
false, 00:07:36.342 "supported_io_types": { 00:07:36.342 "read": true, 00:07:36.342 "write": true, 00:07:36.342 "unmap": true, 00:07:36.342 "flush": true, 00:07:36.342 "reset": true, 00:07:36.342 "nvme_admin": false, 00:07:36.342 "nvme_io": false, 00:07:36.342 "nvme_io_md": false, 00:07:36.342 "write_zeroes": true, 00:07:36.342 "zcopy": true, 00:07:36.342 "get_zone_info": false, 00:07:36.342 "zone_management": false, 00:07:36.342 "zone_append": false, 00:07:36.342 "compare": false, 00:07:36.342 "compare_and_write": false, 00:07:36.342 "abort": true, 00:07:36.342 "seek_hole": false, 00:07:36.342 "seek_data": false, 00:07:36.342 "copy": true, 00:07:36.342 "nvme_iov_md": false 00:07:36.342 }, 00:07:36.342 "memory_domains": [ 00:07:36.342 { 00:07:36.342 "dma_device_id": "system", 00:07:36.342 "dma_device_type": 1 00:07:36.342 }, 00:07:36.342 { 00:07:36.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.342 "dma_device_type": 2 00:07:36.342 } 00:07:36.342 ], 00:07:36.342 "driver_specific": {} 00:07:36.342 } 00:07:36.342 ] 00:07:36.342 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.342 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:36.342 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:36.342 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.342 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.342 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:36.343 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.343 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:36.343 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.343 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.343 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.343 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.343 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.343 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.343 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.343 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.343 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.343 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.343 "name": "Existed_Raid", 00:07:36.343 "uuid": "ebf2c315-6867-41a5-b616-43e985029959", 00:07:36.343 "strip_size_kb": 64, 00:07:36.343 "state": "configuring", 00:07:36.343 "raid_level": "concat", 00:07:36.343 "superblock": true, 00:07:36.343 "num_base_bdevs": 2, 00:07:36.343 "num_base_bdevs_discovered": 1, 00:07:36.343 "num_base_bdevs_operational": 2, 00:07:36.343 "base_bdevs_list": [ 00:07:36.343 { 00:07:36.343 "name": "BaseBdev1", 00:07:36.343 "uuid": "c95bfaf4-9954-4d0c-8b6f-859a50f4681f", 00:07:36.343 "is_configured": true, 00:07:36.343 "data_offset": 2048, 00:07:36.343 "data_size": 63488 00:07:36.343 }, 00:07:36.343 { 00:07:36.343 "name": "BaseBdev2", 00:07:36.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.343 "is_configured": false, 00:07:36.343 "data_offset": 0, 00:07:36.343 "data_size": 0 
00:07:36.343 } 00:07:36.343 ] 00:07:36.343 }' 00:07:36.343 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.343 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.603 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:36.603 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.603 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.603 [2024-10-09 01:27:35.418786] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:36.603 [2024-10-09 01:27:35.418839] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:36.603 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.603 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:36.603 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.603 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.603 [2024-10-09 01:27:35.430808] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:36.603 [2024-10-09 01:27:35.432959] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.603 [2024-10-09 01:27:35.433000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.603 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.603 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:36.603 
01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:36.603 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:36.603 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.603 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.603 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:36.603 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.603 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.603 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.603 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.603 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.603 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.603 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.603 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.603 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.603 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.603 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.603 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.603 
"name": "Existed_Raid", 00:07:36.603 "uuid": "d5e2d2ae-069d-4093-9601-53065ec09243", 00:07:36.603 "strip_size_kb": 64, 00:07:36.603 "state": "configuring", 00:07:36.603 "raid_level": "concat", 00:07:36.603 "superblock": true, 00:07:36.603 "num_base_bdevs": 2, 00:07:36.603 "num_base_bdevs_discovered": 1, 00:07:36.603 "num_base_bdevs_operational": 2, 00:07:36.603 "base_bdevs_list": [ 00:07:36.603 { 00:07:36.603 "name": "BaseBdev1", 00:07:36.603 "uuid": "c95bfaf4-9954-4d0c-8b6f-859a50f4681f", 00:07:36.603 "is_configured": true, 00:07:36.603 "data_offset": 2048, 00:07:36.603 "data_size": 63488 00:07:36.603 }, 00:07:36.603 { 00:07:36.603 "name": "BaseBdev2", 00:07:36.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.603 "is_configured": false, 00:07:36.603 "data_offset": 0, 00:07:36.603 "data_size": 0 00:07:36.603 } 00:07:36.603 ] 00:07:36.603 }' 00:07:36.603 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.603 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.173 [2024-10-09 01:27:35.878602] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:37.173 [2024-10-09 01:27:35.878970] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:37.173 [2024-10-09 01:27:35.879021] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:37.173 BaseBdev2 00:07:37.173 [2024-10-09 01:27:35.879684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:37.173 [2024-10-09 01:27:35.880011] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:37.173 [2024-10-09 01:27:35.880053] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.173 [2024-10-09 01:27:35.880312] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.173 [ 00:07:37.173 
{ 00:07:37.173 "name": "BaseBdev2", 00:07:37.173 "aliases": [ 00:07:37.173 "21770682-71ab-4de9-8a8e-ff6ca94280d7" 00:07:37.173 ], 00:07:37.173 "product_name": "Malloc disk", 00:07:37.173 "block_size": 512, 00:07:37.173 "num_blocks": 65536, 00:07:37.173 "uuid": "21770682-71ab-4de9-8a8e-ff6ca94280d7", 00:07:37.173 "assigned_rate_limits": { 00:07:37.173 "rw_ios_per_sec": 0, 00:07:37.173 "rw_mbytes_per_sec": 0, 00:07:37.173 "r_mbytes_per_sec": 0, 00:07:37.173 "w_mbytes_per_sec": 0 00:07:37.173 }, 00:07:37.173 "claimed": true, 00:07:37.173 "claim_type": "exclusive_write", 00:07:37.173 "zoned": false, 00:07:37.173 "supported_io_types": { 00:07:37.173 "read": true, 00:07:37.173 "write": true, 00:07:37.173 "unmap": true, 00:07:37.173 "flush": true, 00:07:37.173 "reset": true, 00:07:37.173 "nvme_admin": false, 00:07:37.173 "nvme_io": false, 00:07:37.173 "nvme_io_md": false, 00:07:37.173 "write_zeroes": true, 00:07:37.173 "zcopy": true, 00:07:37.173 "get_zone_info": false, 00:07:37.173 "zone_management": false, 00:07:37.173 "zone_append": false, 00:07:37.173 "compare": false, 00:07:37.173 "compare_and_write": false, 00:07:37.173 "abort": true, 00:07:37.173 "seek_hole": false, 00:07:37.173 "seek_data": false, 00:07:37.173 "copy": true, 00:07:37.173 "nvme_iov_md": false 00:07:37.173 }, 00:07:37.173 "memory_domains": [ 00:07:37.173 { 00:07:37.173 "dma_device_id": "system", 00:07:37.173 "dma_device_type": 1 00:07:37.173 }, 00:07:37.173 { 00:07:37.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.173 "dma_device_type": 2 00:07:37.173 } 00:07:37.173 ], 00:07:37.173 "driver_specific": {} 00:07:37.173 } 00:07:37.173 ] 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:37.173 01:27:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.173 "name": 
"Existed_Raid", 00:07:37.173 "uuid": "d5e2d2ae-069d-4093-9601-53065ec09243", 00:07:37.173 "strip_size_kb": 64, 00:07:37.173 "state": "online", 00:07:37.173 "raid_level": "concat", 00:07:37.173 "superblock": true, 00:07:37.173 "num_base_bdevs": 2, 00:07:37.173 "num_base_bdevs_discovered": 2, 00:07:37.173 "num_base_bdevs_operational": 2, 00:07:37.173 "base_bdevs_list": [ 00:07:37.173 { 00:07:37.173 "name": "BaseBdev1", 00:07:37.173 "uuid": "c95bfaf4-9954-4d0c-8b6f-859a50f4681f", 00:07:37.173 "is_configured": true, 00:07:37.173 "data_offset": 2048, 00:07:37.173 "data_size": 63488 00:07:37.173 }, 00:07:37.173 { 00:07:37.173 "name": "BaseBdev2", 00:07:37.173 "uuid": "21770682-71ab-4de9-8a8e-ff6ca94280d7", 00:07:37.173 "is_configured": true, 00:07:37.173 "data_offset": 2048, 00:07:37.173 "data_size": 63488 00:07:37.173 } 00:07:37.173 ] 00:07:37.173 }' 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.173 01:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.433 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:37.433 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:37.433 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:37.433 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:37.433 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:37.433 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:37.433 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:37.433 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 
00:07:37.433 01:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.433 01:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.433 [2024-10-09 01:27:36.306975] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:37.693 01:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.693 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:37.693 "name": "Existed_Raid", 00:07:37.693 "aliases": [ 00:07:37.693 "d5e2d2ae-069d-4093-9601-53065ec09243" 00:07:37.693 ], 00:07:37.693 "product_name": "Raid Volume", 00:07:37.693 "block_size": 512, 00:07:37.693 "num_blocks": 126976, 00:07:37.693 "uuid": "d5e2d2ae-069d-4093-9601-53065ec09243", 00:07:37.693 "assigned_rate_limits": { 00:07:37.693 "rw_ios_per_sec": 0, 00:07:37.693 "rw_mbytes_per_sec": 0, 00:07:37.693 "r_mbytes_per_sec": 0, 00:07:37.693 "w_mbytes_per_sec": 0 00:07:37.693 }, 00:07:37.693 "claimed": false, 00:07:37.693 "zoned": false, 00:07:37.693 "supported_io_types": { 00:07:37.693 "read": true, 00:07:37.693 "write": true, 00:07:37.693 "unmap": true, 00:07:37.693 "flush": true, 00:07:37.693 "reset": true, 00:07:37.693 "nvme_admin": false, 00:07:37.693 "nvme_io": false, 00:07:37.693 "nvme_io_md": false, 00:07:37.693 "write_zeroes": true, 00:07:37.693 "zcopy": false, 00:07:37.693 "get_zone_info": false, 00:07:37.693 "zone_management": false, 00:07:37.693 "zone_append": false, 00:07:37.693 "compare": false, 00:07:37.693 "compare_and_write": false, 00:07:37.693 "abort": false, 00:07:37.693 "seek_hole": false, 00:07:37.693 "seek_data": false, 00:07:37.693 "copy": false, 00:07:37.693 "nvme_iov_md": false 00:07:37.693 }, 00:07:37.693 "memory_domains": [ 00:07:37.693 { 00:07:37.693 "dma_device_id": "system", 00:07:37.693 "dma_device_type": 1 00:07:37.693 }, 00:07:37.693 { 00:07:37.693 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:37.693 "dma_device_type": 2 00:07:37.693 }, 00:07:37.693 { 00:07:37.693 "dma_device_id": "system", 00:07:37.693 "dma_device_type": 1 00:07:37.693 }, 00:07:37.693 { 00:07:37.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.693 "dma_device_type": 2 00:07:37.693 } 00:07:37.693 ], 00:07:37.693 "driver_specific": { 00:07:37.693 "raid": { 00:07:37.693 "uuid": "d5e2d2ae-069d-4093-9601-53065ec09243", 00:07:37.693 "strip_size_kb": 64, 00:07:37.693 "state": "online", 00:07:37.693 "raid_level": "concat", 00:07:37.693 "superblock": true, 00:07:37.693 "num_base_bdevs": 2, 00:07:37.693 "num_base_bdevs_discovered": 2, 00:07:37.693 "num_base_bdevs_operational": 2, 00:07:37.693 "base_bdevs_list": [ 00:07:37.693 { 00:07:37.693 "name": "BaseBdev1", 00:07:37.693 "uuid": "c95bfaf4-9954-4d0c-8b6f-859a50f4681f", 00:07:37.693 "is_configured": true, 00:07:37.693 "data_offset": 2048, 00:07:37.693 "data_size": 63488 00:07:37.693 }, 00:07:37.693 { 00:07:37.693 "name": "BaseBdev2", 00:07:37.693 "uuid": "21770682-71ab-4de9-8a8e-ff6ca94280d7", 00:07:37.693 "is_configured": true, 00:07:37.693 "data_offset": 2048, 00:07:37.693 "data_size": 63488 00:07:37.693 } 00:07:37.693 ] 00:07:37.693 } 00:07:37.693 } 00:07:37.693 }' 00:07:37.693 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:37.693 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:37.693 BaseBdev2' 00:07:37.693 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:37.693 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:37.694 01:27:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.694 [2024-10-09 01:27:36.530867] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:37.694 [2024-10-09 01:27:36.530910] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:37.694 [2024-10-09 01:27:36.530977] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.694 01:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.954 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.954 "name": "Existed_Raid", 00:07:37.954 "uuid": "d5e2d2ae-069d-4093-9601-53065ec09243", 00:07:37.954 "strip_size_kb": 64, 00:07:37.954 "state": "offline", 00:07:37.954 "raid_level": "concat", 00:07:37.954 "superblock": true, 00:07:37.954 "num_base_bdevs": 2, 00:07:37.954 "num_base_bdevs_discovered": 1, 00:07:37.954 "num_base_bdevs_operational": 1, 00:07:37.954 "base_bdevs_list": [ 00:07:37.954 { 00:07:37.954 "name": null, 00:07:37.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.954 "is_configured": false, 00:07:37.954 "data_offset": 0, 00:07:37.954 "data_size": 63488 00:07:37.954 }, 00:07:37.954 { 00:07:37.954 "name": "BaseBdev2", 00:07:37.954 "uuid": "21770682-71ab-4de9-8a8e-ff6ca94280d7", 00:07:37.954 "is_configured": true, 00:07:37.954 "data_offset": 2048, 00:07:37.954 "data_size": 63488 00:07:37.954 } 00:07:37.954 ] 00:07:37.954 }' 00:07:37.954 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:07:37.954 01:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.213 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:38.213 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:38.213 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:38.213 01:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.213 01:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.213 01:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.213 01:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.213 01:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:38.214 01:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:38.214 01:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:38.214 01:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.214 01:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.214 [2024-10-09 01:27:37.007683] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:38.214 [2024-10-09 01:27:37.007761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:38.214 01:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.214 01:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:38.214 01:27:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:38.214 01:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.214 01:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.214 01:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.214 01:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:38.214 01:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.214 01:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:38.214 01:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:38.214 01:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:38.214 01:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74370 00:07:38.214 01:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 74370 ']' 00:07:38.214 01:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 74370 00:07:38.214 01:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:38.214 01:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:38.214 01:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74370 00:07:38.473 01:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:38.473 01:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:38.473 killing process with pid 74370 00:07:38.473 01:27:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74370' 00:07:38.473 01:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 74370 00:07:38.473 [2024-10-09 01:27:37.128671] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:38.473 01:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 74370 00:07:38.473 [2024-10-09 01:27:37.130252] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:38.732 01:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:38.732 00:07:38.732 real 0m3.908s 00:07:38.732 user 0m5.851s 00:07:38.732 sys 0m0.941s 00:07:38.732 01:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.732 01:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.732 ************************************ 00:07:38.732 END TEST raid_state_function_test_sb 00:07:38.732 ************************************ 00:07:38.732 01:27:37 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:38.732 01:27:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:38.732 01:27:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.732 01:27:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:38.732 ************************************ 00:07:38.732 START TEST raid_superblock_test 00:07:38.732 ************************************ 00:07:38.732 01:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:07:38.732 01:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:38.732 01:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:38.732 01:27:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:38.732 01:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:38.732 01:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:38.732 01:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:38.732 01:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:38.732 01:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:38.732 01:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:38.732 01:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:38.732 01:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:38.732 01:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:38.732 01:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:38.732 01:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:38.732 01:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:38.732 01:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:38.732 01:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74607 00:07:38.732 01:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:38.732 01:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74607 00:07:38.732 01:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74607 ']' 00:07:38.732 01:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 
-- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.732 01:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:38.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.732 01:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.732 01:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:38.732 01:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.992 [2024-10-09 01:27:37.666373] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:07:38.992 [2024-10-09 01:27:37.666561] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74607 ] 00:07:38.992 [2024-10-09 01:27:37.800323] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:38.992 [2024-10-09 01:27:37.830479] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.251 [2024-10-09 01:27:37.906836] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.251 [2024-10-09 01:27:37.983155] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.251 [2024-10-09 01:27:37.983209] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.820 malloc1 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.820 [2024-10-09 01:27:38.526717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:39.820 [2024-10-09 01:27:38.526807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.820 [2024-10-09 01:27:38.526829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:39.820 [2024-10-09 01:27:38.526840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.820 [2024-10-09 01:27:38.529275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.820 [2024-10-09 01:27:38.529317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:39.820 pt1 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.820 malloc2 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.820 [2024-10-09 01:27:38.571930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:39.820 [2024-10-09 01:27:38.571996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.820 [2024-10-09 01:27:38.572018] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:39.820 [2024-10-09 01:27:38.572030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.820 [2024-10-09 01:27:38.574951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.820 [2024-10-09 01:27:38.574993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:39.820 pt2 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.820 [2024-10-09 01:27:38.583996] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:39.820 [2024-10-09 01:27:38.586256] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:39.820 [2024-10-09 01:27:38.586416] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:39.820 [2024-10-09 01:27:38.586429] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:39.820 [2024-10-09 01:27:38.586700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:39.820 [2024-10-09 01:27:38.586852] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:39.820 [2024-10-09 01:27:38.586873] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:39.820 [2024-10-09 01:27:38.586998] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:39.820 01:27:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.820 "name": "raid_bdev1", 00:07:39.820 "uuid": "bbe76806-cebc-4686-8ffa-20b542ee6a53", 00:07:39.820 "strip_size_kb": 64, 00:07:39.820 "state": "online", 00:07:39.820 "raid_level": "concat", 00:07:39.820 "superblock": true, 00:07:39.820 "num_base_bdevs": 2, 00:07:39.820 "num_base_bdevs_discovered": 2, 00:07:39.820 "num_base_bdevs_operational": 2, 00:07:39.820 "base_bdevs_list": [ 00:07:39.820 { 00:07:39.820 "name": "pt1", 00:07:39.820 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:39.820 "is_configured": true, 00:07:39.820 "data_offset": 2048, 00:07:39.820 "data_size": 63488 00:07:39.820 }, 00:07:39.820 { 00:07:39.820 "name": "pt2", 00:07:39.820 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:39.820 
"is_configured": true, 00:07:39.820 "data_offset": 2048, 00:07:39.820 "data_size": 63488 00:07:39.820 } 00:07:39.820 ] 00:07:39.820 }' 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.820 01:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.390 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:40.390 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:40.390 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:40.390 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:40.390 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:40.390 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:40.390 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:40.390 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:40.390 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.390 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.390 [2024-10-09 01:27:39.048557] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:40.390 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.390 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:40.390 "name": "raid_bdev1", 00:07:40.390 "aliases": [ 00:07:40.390 "bbe76806-cebc-4686-8ffa-20b542ee6a53" 00:07:40.390 ], 00:07:40.390 "product_name": "Raid Volume", 00:07:40.390 "block_size": 512, 00:07:40.390 "num_blocks": 126976, 00:07:40.390 "uuid": 
"bbe76806-cebc-4686-8ffa-20b542ee6a53", 00:07:40.390 "assigned_rate_limits": { 00:07:40.390 "rw_ios_per_sec": 0, 00:07:40.390 "rw_mbytes_per_sec": 0, 00:07:40.390 "r_mbytes_per_sec": 0, 00:07:40.390 "w_mbytes_per_sec": 0 00:07:40.390 }, 00:07:40.390 "claimed": false, 00:07:40.390 "zoned": false, 00:07:40.390 "supported_io_types": { 00:07:40.390 "read": true, 00:07:40.390 "write": true, 00:07:40.390 "unmap": true, 00:07:40.390 "flush": true, 00:07:40.390 "reset": true, 00:07:40.390 "nvme_admin": false, 00:07:40.390 "nvme_io": false, 00:07:40.390 "nvme_io_md": false, 00:07:40.390 "write_zeroes": true, 00:07:40.390 "zcopy": false, 00:07:40.390 "get_zone_info": false, 00:07:40.390 "zone_management": false, 00:07:40.390 "zone_append": false, 00:07:40.390 "compare": false, 00:07:40.390 "compare_and_write": false, 00:07:40.390 "abort": false, 00:07:40.390 "seek_hole": false, 00:07:40.390 "seek_data": false, 00:07:40.390 "copy": false, 00:07:40.390 "nvme_iov_md": false 00:07:40.390 }, 00:07:40.390 "memory_domains": [ 00:07:40.390 { 00:07:40.390 "dma_device_id": "system", 00:07:40.390 "dma_device_type": 1 00:07:40.390 }, 00:07:40.390 { 00:07:40.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.390 "dma_device_type": 2 00:07:40.390 }, 00:07:40.390 { 00:07:40.390 "dma_device_id": "system", 00:07:40.390 "dma_device_type": 1 00:07:40.390 }, 00:07:40.390 { 00:07:40.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.390 "dma_device_type": 2 00:07:40.390 } 00:07:40.390 ], 00:07:40.390 "driver_specific": { 00:07:40.390 "raid": { 00:07:40.390 "uuid": "bbe76806-cebc-4686-8ffa-20b542ee6a53", 00:07:40.390 "strip_size_kb": 64, 00:07:40.390 "state": "online", 00:07:40.390 "raid_level": "concat", 00:07:40.390 "superblock": true, 00:07:40.390 "num_base_bdevs": 2, 00:07:40.390 "num_base_bdevs_discovered": 2, 00:07:40.390 "num_base_bdevs_operational": 2, 00:07:40.390 "base_bdevs_list": [ 00:07:40.390 { 00:07:40.390 "name": "pt1", 00:07:40.390 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:40.390 "is_configured": true, 00:07:40.390 "data_offset": 2048, 00:07:40.390 "data_size": 63488 00:07:40.390 }, 00:07:40.390 { 00:07:40.390 "name": "pt2", 00:07:40.390 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:40.390 "is_configured": true, 00:07:40.390 "data_offset": 2048, 00:07:40.390 "data_size": 63488 00:07:40.390 } 00:07:40.390 ] 00:07:40.390 } 00:07:40.390 } 00:07:40.390 }' 00:07:40.390 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:40.390 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:40.390 pt2' 00:07:40.390 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:40.390 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:40.390 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:40.390 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:40.391 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.391 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.391 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:40.391 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.391 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:40.391 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:40.391 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:07:40.391 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:40.391 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:40.391 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.391 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.391 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.391 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:40.391 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.651 [2024-10-09 01:27:39.292343] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bbe76806-cebc-4686-8ffa-20b542ee6a53 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bbe76806-cebc-4686-8ffa-20b542ee6a53 ']' 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.651 01:27:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.651 [2024-10-09 01:27:39.336121] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:40.651 [2024-10-09 01:27:39.336149] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:40.651 [2024-10-09 01:27:39.336238] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:40.651 [2024-10-09 01:27:39.336301] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:40.651 [2024-10-09 01:27:39.336314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.651 [2024-10-09 01:27:39.472189] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:40.651 [2024-10-09 01:27:39.474332] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:40.651 [2024-10-09 01:27:39.474416] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:40.651 [2024-10-09 01:27:39.474464] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:40.651 [2024-10-09 01:27:39.474480] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:40.651 [2024-10-09 01:27:39.474490] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:40.651 request: 00:07:40.651 { 00:07:40.651 "name": "raid_bdev1", 00:07:40.651 "raid_level": "concat", 00:07:40.651 "base_bdevs": [ 00:07:40.651 "malloc1", 00:07:40.651 "malloc2" 00:07:40.651 ], 00:07:40.651 "strip_size_kb": 64, 00:07:40.651 "superblock": false, 00:07:40.651 "method": "bdev_raid_create", 00:07:40.651 "req_id": 1 00:07:40.651 } 00:07:40.651 Got JSON-RPC error response 00:07:40.651 response: 00:07:40.651 { 00:07:40.651 "code": -17, 00:07:40.651 "message": "Failed to create RAID bdev raid_bdev1: File exists" 
00:07:40.651 } 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.651 [2024-10-09 01:27:39.536191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:40.651 [2024-10-09 01:27:39.536253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.651 [2024-10-09 01:27:39.536270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:40.651 
[2024-10-09 01:27:39.536284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:40.651 [2024-10-09 01:27:39.538727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.651 [2024-10-09 01:27:39.538762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:40.651 [2024-10-09 01:27:39.538826] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:40.651 [2024-10-09 01:27:39.538871] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:40.651 pt1 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:40.651 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:40.911 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:40.911 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.911 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:40.911 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.911 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.911 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.911 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.911 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.911 01:27:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.911 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.911 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:40.911 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.911 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.911 "name": "raid_bdev1", 00:07:40.911 "uuid": "bbe76806-cebc-4686-8ffa-20b542ee6a53", 00:07:40.911 "strip_size_kb": 64, 00:07:40.911 "state": "configuring", 00:07:40.911 "raid_level": "concat", 00:07:40.911 "superblock": true, 00:07:40.911 "num_base_bdevs": 2, 00:07:40.911 "num_base_bdevs_discovered": 1, 00:07:40.911 "num_base_bdevs_operational": 2, 00:07:40.911 "base_bdevs_list": [ 00:07:40.911 { 00:07:40.911 "name": "pt1", 00:07:40.911 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:40.911 "is_configured": true, 00:07:40.911 "data_offset": 2048, 00:07:40.911 "data_size": 63488 00:07:40.911 }, 00:07:40.911 { 00:07:40.911 "name": null, 00:07:40.911 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:40.911 "is_configured": false, 00:07:40.911 "data_offset": 2048, 00:07:40.911 "data_size": 63488 00:07:40.911 } 00:07:40.911 ] 00:07:40.911 }' 00:07:40.911 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.911 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.172 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:41.172 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:41.172 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:41.172 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:07:41.172 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.172 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.172 [2024-10-09 01:27:39.972305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:41.172 [2024-10-09 01:27:39.972390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:41.172 [2024-10-09 01:27:39.972411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:41.172 [2024-10-09 01:27:39.972422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:41.172 [2024-10-09 01:27:39.972849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:41.172 [2024-10-09 01:27:39.972878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:41.172 [2024-10-09 01:27:39.972946] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:41.172 [2024-10-09 01:27:39.972970] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:41.172 [2024-10-09 01:27:39.973058] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:41.172 [2024-10-09 01:27:39.973081] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:41.172 [2024-10-09 01:27:39.973329] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:41.172 [2024-10-09 01:27:39.973463] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:41.172 [2024-10-09 01:27:39.973479] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:41.172 [2024-10-09 01:27:39.973604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.172 
pt2 00:07:41.172 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.172 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:41.172 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:41.172 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:41.172 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:41.172 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:41.172 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:41.172 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.172 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.172 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.172 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.172 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.172 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.172 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.172 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.172 01:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:41.172 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.172 01:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.172 01:27:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.172 "name": "raid_bdev1", 00:07:41.172 "uuid": "bbe76806-cebc-4686-8ffa-20b542ee6a53", 00:07:41.172 "strip_size_kb": 64, 00:07:41.172 "state": "online", 00:07:41.172 "raid_level": "concat", 00:07:41.172 "superblock": true, 00:07:41.172 "num_base_bdevs": 2, 00:07:41.172 "num_base_bdevs_discovered": 2, 00:07:41.172 "num_base_bdevs_operational": 2, 00:07:41.172 "base_bdevs_list": [ 00:07:41.172 { 00:07:41.172 "name": "pt1", 00:07:41.172 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:41.172 "is_configured": true, 00:07:41.172 "data_offset": 2048, 00:07:41.172 "data_size": 63488 00:07:41.172 }, 00:07:41.172 { 00:07:41.172 "name": "pt2", 00:07:41.172 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:41.172 "is_configured": true, 00:07:41.172 "data_offset": 2048, 00:07:41.172 "data_size": 63488 00:07:41.172 } 00:07:41.172 ] 00:07:41.172 }' 00:07:41.172 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.172 01:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.743 [2024-10-09 01:27:40.420767] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:41.743 "name": "raid_bdev1", 00:07:41.743 "aliases": [ 00:07:41.743 "bbe76806-cebc-4686-8ffa-20b542ee6a53" 00:07:41.743 ], 00:07:41.743 "product_name": "Raid Volume", 00:07:41.743 "block_size": 512, 00:07:41.743 "num_blocks": 126976, 00:07:41.743 "uuid": "bbe76806-cebc-4686-8ffa-20b542ee6a53", 00:07:41.743 "assigned_rate_limits": { 00:07:41.743 "rw_ios_per_sec": 0, 00:07:41.743 "rw_mbytes_per_sec": 0, 00:07:41.743 "r_mbytes_per_sec": 0, 00:07:41.743 "w_mbytes_per_sec": 0 00:07:41.743 }, 00:07:41.743 "claimed": false, 00:07:41.743 "zoned": false, 00:07:41.743 "supported_io_types": { 00:07:41.743 "read": true, 00:07:41.743 "write": true, 00:07:41.743 "unmap": true, 00:07:41.743 "flush": true, 00:07:41.743 "reset": true, 00:07:41.743 "nvme_admin": false, 00:07:41.743 "nvme_io": false, 00:07:41.743 "nvme_io_md": false, 00:07:41.743 "write_zeroes": true, 00:07:41.743 "zcopy": false, 00:07:41.743 "get_zone_info": false, 00:07:41.743 "zone_management": false, 00:07:41.743 "zone_append": false, 00:07:41.743 "compare": false, 00:07:41.743 "compare_and_write": false, 00:07:41.743 "abort": false, 00:07:41.743 "seek_hole": false, 00:07:41.743 "seek_data": false, 00:07:41.743 "copy": false, 00:07:41.743 "nvme_iov_md": false 00:07:41.743 }, 00:07:41.743 "memory_domains": [ 00:07:41.743 { 00:07:41.743 "dma_device_id": "system", 00:07:41.743 "dma_device_type": 1 00:07:41.743 }, 00:07:41.743 { 00:07:41.743 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.743 "dma_device_type": 2 00:07:41.743 }, 00:07:41.743 { 00:07:41.743 "dma_device_id": "system", 00:07:41.743 "dma_device_type": 1 00:07:41.743 }, 00:07:41.743 { 00:07:41.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.743 "dma_device_type": 2 00:07:41.743 } 00:07:41.743 ], 00:07:41.743 "driver_specific": { 00:07:41.743 "raid": { 00:07:41.743 "uuid": "bbe76806-cebc-4686-8ffa-20b542ee6a53", 00:07:41.743 "strip_size_kb": 64, 00:07:41.743 "state": "online", 00:07:41.743 "raid_level": "concat", 00:07:41.743 "superblock": true, 00:07:41.743 "num_base_bdevs": 2, 00:07:41.743 "num_base_bdevs_discovered": 2, 00:07:41.743 "num_base_bdevs_operational": 2, 00:07:41.743 "base_bdevs_list": [ 00:07:41.743 { 00:07:41.743 "name": "pt1", 00:07:41.743 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:41.743 "is_configured": true, 00:07:41.743 "data_offset": 2048, 00:07:41.743 "data_size": 63488 00:07:41.743 }, 00:07:41.743 { 00:07:41.743 "name": "pt2", 00:07:41.743 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:41.743 "is_configured": true, 00:07:41.743 "data_offset": 2048, 00:07:41.743 "data_size": 63488 00:07:41.743 } 00:07:41.743 ] 00:07:41.743 } 00:07:41.743 } 00:07:41.743 }' 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:41.743 pt2' 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b pt1 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.743 01:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.004 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:42.004 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:42.004 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:42.004 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:42.004 01:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.004 01:27:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:42.004 [2024-10-09 01:27:40.664915] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:42.004 01:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.004 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bbe76806-cebc-4686-8ffa-20b542ee6a53 '!=' bbe76806-cebc-4686-8ffa-20b542ee6a53 ']' 00:07:42.004 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:42.004 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:42.004 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:42.004 01:27:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74607 00:07:42.004 01:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74607 ']' 00:07:42.004 01:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74607 00:07:42.004 01:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:42.004 01:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:42.004 01:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74607 00:07:42.004 01:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:42.004 01:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:42.004 killing process with pid 74607 00:07:42.004 01:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74607' 00:07:42.004 01:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74607 00:07:42.004 [2024-10-09 01:27:40.736307] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:07:42.004 [2024-10-09 01:27:40.736446] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:42.004 [2024-10-09 01:27:40.736507] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:42.004 01:27:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74607 00:07:42.004 [2024-10-09 01:27:40.736531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:42.004 [2024-10-09 01:27:40.777419] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:42.264 01:27:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:42.264 00:07:42.264 real 0m3.575s 00:07:42.264 user 0m5.349s 00:07:42.264 sys 0m0.809s 00:07:42.264 01:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.264 01:27:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.264 ************************************ 00:07:42.264 END TEST raid_superblock_test 00:07:42.264 ************************************ 00:07:42.524 01:27:41 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:42.524 01:27:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:42.524 01:27:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.524 01:27:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:42.524 ************************************ 00:07:42.524 START TEST raid_read_error_test 00:07:42.524 ************************************ 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=2 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:42.524 01:27:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YsCrw0MyPi 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74813 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74813 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 74813 ']' 00:07:42.524 01:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.525 01:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:42.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.525 01:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.525 01:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:42.525 01:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.525 [2024-10-09 01:27:41.327997] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:07:42.525 [2024-10-09 01:27:41.328103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74813 ] 00:07:42.786 [2024-10-09 01:27:41.458287] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:07:42.786 [2024-10-09 01:27:41.471346] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.787 [2024-10-09 01:27:41.541143] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.787 [2024-10-09 01:27:41.617361] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.787 [2024-10-09 01:27:41.617414] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.358 01:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:43.358 01:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:43.358 01:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:43.358 01:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:43.358 01:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.358 01:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.358 BaseBdev1_malloc 00:07:43.358 01:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.358 01:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:43.358 01:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.358 01:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.358 true 00:07:43.358 01:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.358 01:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:43.358 01:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:43.358 01:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.358 [2024-10-09 01:27:42.188399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:43.358 [2024-10-09 01:27:42.188469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.358 [2024-10-09 01:27:42.188489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:43.358 [2024-10-09 01:27:42.188504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.358 [2024-10-09 01:27:42.190885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.358 [2024-10-09 01:27:42.190925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:43.358 BaseBdev1 00:07:43.358 01:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.358 01:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:43.358 01:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:43.358 01:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.358 01:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.358 BaseBdev2_malloc 00:07:43.358 01:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.358 01:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:43.358 01:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.358 01:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.358 true 00:07:43.358 01:27:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.358 01:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:43.358 01:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.358 01:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.358 [2024-10-09 01:27:42.245259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:43.358 [2024-10-09 01:27:42.245329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.358 [2024-10-09 01:27:42.245347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:43.358 [2024-10-09 01:27:42.245359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.358 [2024-10-09 01:27:42.247807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.358 [2024-10-09 01:27:42.247852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:43.358 BaseBdev2 00:07:43.618 01:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.618 01:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:43.618 01:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.618 01:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.618 [2024-10-09 01:27:42.257334] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:43.618 [2024-10-09 01:27:42.259507] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:43.618 [2024-10-09 01:27:42.259728] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:07:43.618 [2024-10-09 01:27:42.259743] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:43.618 [2024-10-09 01:27:42.260063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:43.618 [2024-10-09 01:27:42.260236] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:43.618 [2024-10-09 01:27:42.260253] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:43.618 [2024-10-09 01:27:42.260444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.618 01:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.618 01:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:43.618 01:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:43.618 01:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:43.618 01:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:43.618 01:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.618 01:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.618 01:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.618 01:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.618 01:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.618 01:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.618 01:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:07:43.618 01:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:43.618 01:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.618 01:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.618 01:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.618 01:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.618 "name": "raid_bdev1", 00:07:43.618 "uuid": "59026365-84e3-4c17-9e1a-f7f84b7f4684", 00:07:43.618 "strip_size_kb": 64, 00:07:43.618 "state": "online", 00:07:43.618 "raid_level": "concat", 00:07:43.618 "superblock": true, 00:07:43.618 "num_base_bdevs": 2, 00:07:43.618 "num_base_bdevs_discovered": 2, 00:07:43.618 "num_base_bdevs_operational": 2, 00:07:43.618 "base_bdevs_list": [ 00:07:43.618 { 00:07:43.618 "name": "BaseBdev1", 00:07:43.618 "uuid": "193a48a6-2fcb-5ccc-8410-1e0a50ae62b8", 00:07:43.618 "is_configured": true, 00:07:43.618 "data_offset": 2048, 00:07:43.618 "data_size": 63488 00:07:43.618 }, 00:07:43.618 { 00:07:43.618 "name": "BaseBdev2", 00:07:43.618 "uuid": "241b36e0-1ddd-59db-91b8-bc9aacf6a505", 00:07:43.618 "is_configured": true, 00:07:43.618 "data_offset": 2048, 00:07:43.618 "data_size": 63488 00:07:43.618 } 00:07:43.618 ] 00:07:43.618 }' 00:07:43.618 01:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.618 01:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.878 01:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:43.878 01:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:44.138 [2024-10-09 01:27:42.797966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:45.078 
01:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:45.078 01:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.078 01:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.078 01:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.078 01:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:45.078 01:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:45.078 01:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:45.078 01:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:45.078 01:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:45.078 01:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:45.078 01:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:45.078 01:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.078 01:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.078 01:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.078 01:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.078 01:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.078 01:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.078 01:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:07:45.078 01:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.078 01:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:45.078 01:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.078 01:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.078 01:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.078 "name": "raid_bdev1", 00:07:45.078 "uuid": "59026365-84e3-4c17-9e1a-f7f84b7f4684", 00:07:45.078 "strip_size_kb": 64, 00:07:45.078 "state": "online", 00:07:45.078 "raid_level": "concat", 00:07:45.078 "superblock": true, 00:07:45.078 "num_base_bdevs": 2, 00:07:45.078 "num_base_bdevs_discovered": 2, 00:07:45.078 "num_base_bdevs_operational": 2, 00:07:45.078 "base_bdevs_list": [ 00:07:45.078 { 00:07:45.078 "name": "BaseBdev1", 00:07:45.078 "uuid": "193a48a6-2fcb-5ccc-8410-1e0a50ae62b8", 00:07:45.078 "is_configured": true, 00:07:45.078 "data_offset": 2048, 00:07:45.078 "data_size": 63488 00:07:45.078 }, 00:07:45.078 { 00:07:45.078 "name": "BaseBdev2", 00:07:45.078 "uuid": "241b36e0-1ddd-59db-91b8-bc9aacf6a505", 00:07:45.078 "is_configured": true, 00:07:45.078 "data_offset": 2048, 00:07:45.078 "data_size": 63488 00:07:45.078 } 00:07:45.078 ] 00:07:45.078 }' 00:07:45.078 01:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.078 01:27:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.338 01:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:45.338 01:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.338 01:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.338 [2024-10-09 01:27:44.157062] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:45.338 [2024-10-09 01:27:44.157111] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:45.338 [2024-10-09 01:27:44.159566] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:45.338 [2024-10-09 01:27:44.159633] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.338 [2024-10-09 01:27:44.159666] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:45.338 [2024-10-09 01:27:44.159678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:45.338 { 00:07:45.338 "results": [ 00:07:45.338 { 00:07:45.338 "job": "raid_bdev1", 00:07:45.338 "core_mask": "0x1", 00:07:45.338 "workload": "randrw", 00:07:45.338 "percentage": 50, 00:07:45.338 "status": "finished", 00:07:45.338 "queue_depth": 1, 00:07:45.338 "io_size": 131072, 00:07:45.338 "runtime": 1.356846, 00:07:45.338 "iops": 15463.066552873355, 00:07:45.338 "mibps": 1932.8833191091694, 00:07:45.338 "io_failed": 1, 00:07:45.338 "io_timeout": 0, 00:07:45.338 "avg_latency_us": 90.65714389234262, 00:07:45.338 "min_latency_us": 24.321450361718817, 00:07:45.338 "max_latency_us": 1328.085069293123 00:07:45.338 } 00:07:45.338 ], 00:07:45.338 "core_count": 1 00:07:45.338 } 00:07:45.338 01:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.338 01:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74813 00:07:45.339 01:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 74813 ']' 00:07:45.339 01:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 74813 00:07:45.339 01:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:45.339 01:27:44 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:45.339 01:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74813 00:07:45.339 killing process with pid 74813 00:07:45.339 01:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:45.339 01:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:45.339 01:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74813' 00:07:45.339 01:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 74813 00:07:45.339 [2024-10-09 01:27:44.198798] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:45.339 01:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 74813 00:07:45.339 [2024-10-09 01:27:44.226860] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:45.909 01:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YsCrw0MyPi 00:07:45.909 01:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:45.909 01:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:45.909 01:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:45.909 01:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:45.909 01:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:45.909 01:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:45.909 01:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:45.909 00:07:45.909 real 0m3.380s 00:07:45.909 user 0m4.154s 00:07:45.909 sys 0m0.589s 00:07:45.909 01:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:07:45.909 01:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.909 ************************************ 00:07:45.909 END TEST raid_read_error_test 00:07:45.909 ************************************ 00:07:45.909 01:27:44 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:45.909 01:27:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:45.909 01:27:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.909 01:27:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:45.909 ************************************ 00:07:45.909 START TEST raid_write_error_test 00:07:45.909 ************************************ 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.FvBDytaT0j 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74942 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74942 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 74942 ']' 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.909 
01:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.909 01:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.909 [2024-10-09 01:27:44.792258] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:07:45.909 [2024-10-09 01:27:44.792427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74942 ] 00:07:46.169 [2024-10-09 01:27:44.928869] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:46.169 [2024-10-09 01:27:44.955241] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.169 [2024-10-09 01:27:45.027028] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.429 [2024-10-09 01:27:45.105056] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.429 [2024-10-09 01:27:45.105102] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.999 BaseBdev1_malloc 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.999 true 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.999 01:27:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.999 [2024-10-09 01:27:45.640008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:46.999 [2024-10-09 01:27:45.640082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.999 [2024-10-09 01:27:45.640109] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:46.999 [2024-10-09 01:27:45.640132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.999 [2024-10-09 01:27:45.642601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.999 [2024-10-09 01:27:45.642633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:46.999 BaseBdev1 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.999 BaseBdev2_malloc 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.999 true 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.999 [2024-10-09 01:27:45.698102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:46.999 [2024-10-09 01:27:45.698173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.999 [2024-10-09 01:27:45.698193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:46.999 [2024-10-09 01:27:45.698204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.999 [2024-10-09 01:27:45.700597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.999 [2024-10-09 01:27:45.700635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:46.999 BaseBdev2 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.999 [2024-10-09 01:27:45.710171] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:46.999 [2024-10-09 01:27:45.712275] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:46.999 [2024-10-09 01:27:45.712494] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:46.999 
[2024-10-09 01:27:45.712510] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:46.999 [2024-10-09 01:27:45.712834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:46.999 [2024-10-09 01:27:45.713006] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:46.999 [2024-10-09 01:27:45.713021] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:46.999 [2024-10-09 01:27:45.713197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.999 01:27:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.999 "name": "raid_bdev1", 00:07:46.999 "uuid": "ac2b58bd-0f14-4878-a769-0cf51d5a056e", 00:07:46.999 "strip_size_kb": 64, 00:07:46.999 "state": "online", 00:07:46.999 "raid_level": "concat", 00:07:46.999 "superblock": true, 00:07:46.999 "num_base_bdevs": 2, 00:07:46.999 "num_base_bdevs_discovered": 2, 00:07:46.999 "num_base_bdevs_operational": 2, 00:07:46.999 "base_bdevs_list": [ 00:07:46.999 { 00:07:46.999 "name": "BaseBdev1", 00:07:46.999 "uuid": "08a10af3-2e0b-5472-91d4-8f16b21c4e26", 00:07:46.999 "is_configured": true, 00:07:46.999 "data_offset": 2048, 00:07:46.999 "data_size": 63488 00:07:46.999 }, 00:07:46.999 { 00:07:46.999 "name": "BaseBdev2", 00:07:46.999 "uuid": "21720754-b142-5385-afa1-e43de7bfc8c5", 00:07:46.999 "is_configured": true, 00:07:46.999 "data_offset": 2048, 00:07:46.999 "data_size": 63488 00:07:46.999 } 00:07:46.999 ] 00:07:46.999 }' 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.999 01:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.567 01:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:47.567 01:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:47.567 [2024-10-09 01:27:46.298762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:48.508 01:27:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:48.508 01:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.508 01:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.508 01:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.508 01:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:48.508 01:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:48.508 01:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:48.508 01:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:48.508 01:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:48.508 01:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.508 01:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:48.508 01:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.508 01:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.508 01:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.508 01:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.508 01:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.508 01:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.508 01:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:07:48.508 01:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:48.508 01:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.508 01:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.508 01:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.508 01:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.508 "name": "raid_bdev1", 00:07:48.508 "uuid": "ac2b58bd-0f14-4878-a769-0cf51d5a056e", 00:07:48.508 "strip_size_kb": 64, 00:07:48.508 "state": "online", 00:07:48.508 "raid_level": "concat", 00:07:48.508 "superblock": true, 00:07:48.508 "num_base_bdevs": 2, 00:07:48.508 "num_base_bdevs_discovered": 2, 00:07:48.508 "num_base_bdevs_operational": 2, 00:07:48.508 "base_bdevs_list": [ 00:07:48.508 { 00:07:48.508 "name": "BaseBdev1", 00:07:48.508 "uuid": "08a10af3-2e0b-5472-91d4-8f16b21c4e26", 00:07:48.508 "is_configured": true, 00:07:48.508 "data_offset": 2048, 00:07:48.508 "data_size": 63488 00:07:48.508 }, 00:07:48.508 { 00:07:48.508 "name": "BaseBdev2", 00:07:48.508 "uuid": "21720754-b142-5385-afa1-e43de7bfc8c5", 00:07:48.508 "is_configured": true, 00:07:48.508 "data_offset": 2048, 00:07:48.508 "data_size": 63488 00:07:48.508 } 00:07:48.508 ] 00:07:48.508 }' 00:07:48.508 01:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.508 01:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.078 01:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:49.078 01:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.078 01:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.078 [2024-10-09 01:27:47.690698] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:49.078 [2024-10-09 01:27:47.690744] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:49.078 [2024-10-09 01:27:47.693227] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:49.078 [2024-10-09 01:27:47.693285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.078 [2024-10-09 01:27:47.693321] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:49.078 [2024-10-09 01:27:47.693333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:49.078 { 00:07:49.078 "results": [ 00:07:49.078 { 00:07:49.078 "job": "raid_bdev1", 00:07:49.078 "core_mask": "0x1", 00:07:49.078 "workload": "randrw", 00:07:49.078 "percentage": 50, 00:07:49.078 "status": "finished", 00:07:49.078 "queue_depth": 1, 00:07:49.078 "io_size": 131072, 00:07:49.078 "runtime": 1.389798, 00:07:49.078 "iops": 15559.815167384037, 00:07:49.078 "mibps": 1944.9768959230046, 00:07:49.078 "io_failed": 1, 00:07:49.078 "io_timeout": 0, 00:07:49.078 "avg_latency_us": 90.06423946034917, 00:07:49.078 "min_latency_us": 24.43301664778175, 00:07:49.078 "max_latency_us": 1349.5057962172057 00:07:49.078 } 00:07:49.078 ], 00:07:49.078 "core_count": 1 00:07:49.078 } 00:07:49.078 01:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.078 01:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74942 00:07:49.078 01:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 74942 ']' 00:07:49.078 01:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 74942 00:07:49.078 01:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:49.078 01:27:47 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:49.078 01:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74942 00:07:49.078 01:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:49.078 01:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:49.078 killing process with pid 74942 00:07:49.078 01:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74942' 00:07:49.078 01:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 74942 00:07:49.078 [2024-10-09 01:27:47.736383] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:49.078 01:27:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 74942 00:07:49.078 [2024-10-09 01:27:47.765255] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:49.338 01:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.FvBDytaT0j 00:07:49.338 01:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:49.338 01:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:49.338 01:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:49.338 01:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:49.338 01:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:49.338 01:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:49.338 01:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:49.338 00:07:49.338 real 0m3.459s 00:07:49.338 user 0m4.228s 00:07:49.338 sys 0m0.661s 00:07:49.338 01:27:48 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:49.338 01:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:49.338 ************************************
00:07:49.338 END TEST raid_write_error_test
00:07:49.338 ************************************
00:07:49.338 01:27:48 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:07:49.338 01:27:48 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false
00:07:49.338 01:27:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:07:49.338 01:27:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:49.338 01:27:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:49.338 ************************************
00:07:49.338 START TEST raid_state_function_test
00:07:49.338 ************************************
00:07:49.338 01:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false
00:07:49.338 01:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:07:49.338 01:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:07:49.338 01:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:07:49.338 01:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:07:49.338 01:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:07:49.339 01:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:49.339 01:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:07:49.339 01:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:49.339 01:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:49.339 01:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:07:49.339 01:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:49.339 01:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:49.339 01:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:49.339 01:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:07:49.339 01:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:07:49.339 01:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:07:49.339 01:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:07:49.339 01:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:07:49.339 01:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:07:49.339 01:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:07:49.339 01:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:07:49.599 01:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:07:49.599 01:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=75069
00:07:49.599 01:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:49.599 01:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75069'
00:07:49.599 Process raid pid: 75069
01:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 75069
00:07:49.599 01:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 75069 ']'
00:07:49.599 01:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:49.599 01:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:49.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
01:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:49.599 01:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:49.599 01:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:49.599 [2024-10-09 01:27:48.316460] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization...
00:07:49.599 [2024-10-09 01:27:48.316606] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:49.599 [2024-10-09 01:27:48.453995] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:07:49.599 [2024-10-09 01:27:48.480785] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:49.859 [2024-10-09 01:27:48.564282] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:07:49.859 [2024-10-09 01:27:48.644054] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:49.859 [2024-10-09 01:27:48.644094] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:50.428 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:50.428 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:07:50.428 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:50.428 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:50.428 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.428 [2024-10-09 01:27:49.142246] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:50.428 [2024-10-09 01:27:49.142309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:50.428 [2024-10-09 01:27:49.142324] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:50.428 [2024-10-09 01:27:49.142331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:50.428 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:50.428 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:07:50.428 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:50.428 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:50.428 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:50.428 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:50.428 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:50.428 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:50.428 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:50.428 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:50.428 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:50.428 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:50.428 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:50.428 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:50.428 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.428 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:50.428 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:50.428 "name": "Existed_Raid",
00:07:50.428 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:50.428 "strip_size_kb": 0,
00:07:50.428 "state": "configuring",
00:07:50.428 "raid_level": "raid1",
00:07:50.428 "superblock": false,
00:07:50.428 "num_base_bdevs": 2,
00:07:50.428 "num_base_bdevs_discovered": 0,
00:07:50.428 "num_base_bdevs_operational": 2,
00:07:50.428 "base_bdevs_list": [
00:07:50.428 {
00:07:50.428 "name": "BaseBdev1",
00:07:50.428 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:50.428 "is_configured": false,
00:07:50.428 "data_offset": 0,
00:07:50.428 "data_size": 0
00:07:50.428 },
00:07:50.428 {
00:07:50.428 "name": "BaseBdev2",
00:07:50.428 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:50.428 "is_configured": false,
00:07:50.428 "data_offset": 0,
00:07:50.428 "data_size": 0
00:07:50.428 }
00:07:50.428 ]
00:07:50.428 }'
00:07:50.428 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:50.428 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.688 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:50.688 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:50.688 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.688 [2024-10-09 01:27:49.578220] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:50.688 [2024-10-09 01:27:49.578267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.958 [2024-10-09 01:27:49.586216] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:50.958 [2024-10-09 01:27:49.586255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:50.958 [2024-10-09 01:27:49.586267] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:50.958 [2024-10-09 01:27:49.586274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.958 [2024-10-09 01:27:49.609336] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:50.958 BaseBdev1
01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.958 [
00:07:50.958 {
00:07:50.958 "name": "BaseBdev1",
00:07:50.958 "aliases": [
00:07:50.958 "2f5abe43-3ec6-4715-82f7-43fd0ec9b6bc"
00:07:50.958 ],
00:07:50.958 "product_name": "Malloc disk",
00:07:50.958 "block_size": 512,
00:07:50.958 "num_blocks": 65536,
00:07:50.958 "uuid": "2f5abe43-3ec6-4715-82f7-43fd0ec9b6bc",
00:07:50.958 "assigned_rate_limits": {
00:07:50.958 "rw_ios_per_sec": 0,
00:07:50.958 "rw_mbytes_per_sec": 0,
00:07:50.958 "r_mbytes_per_sec": 0,
00:07:50.958 "w_mbytes_per_sec": 0
00:07:50.958 },
00:07:50.958 "claimed": true,
00:07:50.958 "claim_type": "exclusive_write",
00:07:50.958 "zoned": false,
00:07:50.958 "supported_io_types": {
00:07:50.958 "read": true,
00:07:50.958 "write": true,
00:07:50.958 "unmap": true,
00:07:50.958 "flush": true,
00:07:50.958 "reset": true,
00:07:50.958 "nvme_admin": false,
00:07:50.958 "nvme_io": false,
00:07:50.958 "nvme_io_md": false,
00:07:50.958 "write_zeroes": true,
00:07:50.958 "zcopy": true,
00:07:50.958 "get_zone_info": false,
00:07:50.958 "zone_management": false,
00:07:50.958 "zone_append": false,
00:07:50.958 "compare": false,
00:07:50.958 "compare_and_write": false,
00:07:50.958 "abort": true,
00:07:50.958 "seek_hole": false,
00:07:50.958 "seek_data": false,
00:07:50.958 "copy": true,
00:07:50.958 "nvme_iov_md": false
00:07:50.958 },
00:07:50.958 "memory_domains": [
00:07:50.958 {
00:07:50.958 "dma_device_id": "system",
00:07:50.958 "dma_device_type": 1
00:07:50.958 },
00:07:50.958 {
00:07:50.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:50.958 "dma_device_type": 2
00:07:50.958 }
00:07:50.958 ],
00:07:50.958 "driver_specific": {}
00:07:50.958 }
00:07:50.958 ]
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:50.958 "name": "Existed_Raid",
00:07:50.958 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:50.958 "strip_size_kb": 0,
00:07:50.958 "state": "configuring",
00:07:50.958 "raid_level": "raid1",
00:07:50.958 "superblock": false,
00:07:50.958 "num_base_bdevs": 2,
00:07:50.958 "num_base_bdevs_discovered": 1,
00:07:50.958 "num_base_bdevs_operational": 2,
00:07:50.958 "base_bdevs_list": [
00:07:50.958 {
00:07:50.958 "name": "BaseBdev1",
00:07:50.958 "uuid": "2f5abe43-3ec6-4715-82f7-43fd0ec9b6bc",
00:07:50.958 "is_configured": true,
00:07:50.958 "data_offset": 0,
00:07:50.958 "data_size": 65536
00:07:50.958 },
00:07:50.958 {
00:07:50.958 "name": "BaseBdev2",
00:07:50.958 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:50.958 "is_configured": false,
00:07:50.958 "data_offset": 0,
00:07:50.958 "data_size": 0
00:07:50.958 }
00:07:50.958 ]
00:07:50.958 }'
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:50.958 01:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.543 [2024-10-09 01:27:50.137543] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:51.543 [2024-10-09 01:27:50.137597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.543 [2024-10-09 01:27:50.149510] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:51.543 [2024-10-09 01:27:50.151551] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:51.543 [2024-10-09 01:27:50.151583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:51.543 "name": "Existed_Raid",
00:07:51.543 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:51.543 "strip_size_kb": 0,
00:07:51.543 "state": "configuring",
00:07:51.543 "raid_level": "raid1",
00:07:51.543 "superblock": false,
00:07:51.543 "num_base_bdevs": 2,
00:07:51.543 "num_base_bdevs_discovered": 1,
00:07:51.543 "num_base_bdevs_operational": 2,
00:07:51.543 "base_bdevs_list": [
00:07:51.543 {
00:07:51.543 "name": "BaseBdev1",
00:07:51.543 "uuid": "2f5abe43-3ec6-4715-82f7-43fd0ec9b6bc",
00:07:51.543 "is_configured": true,
00:07:51.543 "data_offset": 0,
00:07:51.543 "data_size": 65536
00:07:51.543 },
00:07:51.543 {
00:07:51.543 "name": "BaseBdev2",
00:07:51.543 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:51.543 "is_configured": false,
00:07:51.543 "data_offset": 0,
00:07:51.543 "data_size": 0
00:07:51.543 }
00:07:51.543 ]
00:07:51.543 }'
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:51.543 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.803 [2024-10-09 01:27:50.631954] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:51.803 [2024-10-09 01:27:50.632081] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:07:51.803 [2024-10-09 01:27:50.632162] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:07:51.803 [2024-10-09 01:27:50.633193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:07:51.803 [2024-10-09 01:27:50.633725] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:07:51.803 [2024-10-09 01:27:50.633785] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
BaseBdev2
[2024-10-09 01:27:50.634481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.803 [
00:07:51.803 {
00:07:51.803 "name": "BaseBdev2",
00:07:51.803 "aliases": [
00:07:51.803 "4b0c8e70-add1-4fa6-8223-2d1b16694884"
00:07:51.803 ],
00:07:51.803 "product_name": "Malloc disk",
00:07:51.803 "block_size": 512,
00:07:51.803 "num_blocks": 65536,
00:07:51.803 "uuid": "4b0c8e70-add1-4fa6-8223-2d1b16694884",
00:07:51.803 "assigned_rate_limits": {
00:07:51.803 "rw_ios_per_sec": 0,
00:07:51.803 "rw_mbytes_per_sec": 0,
00:07:51.803 "r_mbytes_per_sec": 0,
00:07:51.803 "w_mbytes_per_sec": 0
00:07:51.803 },
00:07:51.803 "claimed": true,
00:07:51.803 "claim_type": "exclusive_write",
00:07:51.803 "zoned": false,
00:07:51.803 "supported_io_types": {
00:07:51.803 "read": true,
00:07:51.803 "write": true,
00:07:51.803 "unmap": true,
00:07:51.803 "flush": true,
00:07:51.803 "reset": true,
00:07:51.803 "nvme_admin": false,
00:07:51.803 "nvme_io": false,
00:07:51.803 "nvme_io_md": false,
00:07:51.803 "write_zeroes": true,
00:07:51.803 "zcopy": true,
00:07:51.803 "get_zone_info": false,
00:07:51.803 "zone_management": false,
00:07:51.803 "zone_append": false,
00:07:51.803 "compare": false,
00:07:51.803 "compare_and_write": false,
00:07:51.803 "abort": true,
00:07:51.803 "seek_hole": false,
00:07:51.803 "seek_data": false,
00:07:51.803 "copy": true,
00:07:51.803 "nvme_iov_md": false
00:07:51.803 },
00:07:51.803 "memory_domains": [
00:07:51.803 {
00:07:51.803 "dma_device_id": "system",
00:07:51.803 "dma_device_type": 1
00:07:51.803 },
00:07:51.803 {
00:07:51.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:51.803 "dma_device_type": 2
00:07:51.803 }
00:07:51.803 ],
00:07:51.803 "driver_specific": {}
00:07:51.803 }
00:07:51.803 ]
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:51.803 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:51.804 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:51.804 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:51.804 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:51.804 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.804 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:52.063 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:52.063 "name": "Existed_Raid",
00:07:52.063 "uuid": "0ae65bf5-1f0b-4bc2-ba86-9046fff73cca",
00:07:52.063 "strip_size_kb": 0,
00:07:52.063 "state": "online",
00:07:52.063 "raid_level": "raid1",
00:07:52.063 "superblock": false,
00:07:52.063 "num_base_bdevs": 2,
00:07:52.063 "num_base_bdevs_discovered": 2,
00:07:52.063 "num_base_bdevs_operational": 2,
00:07:52.063 "base_bdevs_list": [
00:07:52.063 {
00:07:52.063 "name": "BaseBdev1",
00:07:52.063 "uuid": "2f5abe43-3ec6-4715-82f7-43fd0ec9b6bc",
00:07:52.063 "is_configured": true,
00:07:52.063 "data_offset": 0,
00:07:52.063 "data_size": 65536
00:07:52.063 },
00:07:52.063 {
00:07:52.063 "name": "BaseBdev2",
00:07:52.063 "uuid": "4b0c8e70-add1-4fa6-8223-2d1b16694884",
00:07:52.063 "is_configured": true,
00:07:52.063 "data_offset": 0,
00:07:52.063 "data_size": 65536
00:07:52.063 }
00:07:52.063 ]
00:07:52.063 }'
00:07:52.063 01:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:52.063 01:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:52.322 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:07:52.322 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:07:52.322 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:52.322 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:52.322 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:52.322 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:52.322 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:07:52.322 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:52.322 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:52.322 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:52.322 [2024-10-09 01:27:51.172354] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:52.322 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:52.322 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:52.322 "name": "Existed_Raid",
00:07:52.322 "aliases": [
00:07:52.322 "0ae65bf5-1f0b-4bc2-ba86-9046fff73cca"
00:07:52.322 ],
00:07:52.322 "product_name": "Raid Volume",
00:07:52.322 "block_size": 512,
00:07:52.322 "num_blocks": 65536,
00:07:52.322 "uuid": "0ae65bf5-1f0b-4bc2-ba86-9046fff73cca",
00:07:52.322 "assigned_rate_limits": {
00:07:52.322 "rw_ios_per_sec": 0,
00:07:52.322 "rw_mbytes_per_sec": 0,
00:07:52.322 "r_mbytes_per_sec": 0,
00:07:52.322 "w_mbytes_per_sec": 0
00:07:52.322 },
00:07:52.322 "claimed": false,
00:07:52.322 "zoned": false,
00:07:52.322 "supported_io_types": {
00:07:52.322 "read": true,
00:07:52.322 "write": true,
00:07:52.322 "unmap": false,
00:07:52.322 "flush": false,
00:07:52.322 "reset": true,
00:07:52.322 "nvme_admin": false,
00:07:52.322 "nvme_io": false,
00:07:52.322 "nvme_io_md": false,
00:07:52.322 "write_zeroes": true,
00:07:52.322 "zcopy": false,
00:07:52.322 "get_zone_info": false,
00:07:52.322 "zone_management": false,
00:07:52.322 "zone_append": false,
00:07:52.322 "compare": false,
00:07:52.322 "compare_and_write": false,
00:07:52.322 "abort": false,
00:07:52.322 "seek_hole": false,
00:07:52.322 "seek_data": false,
00:07:52.322 "copy": false,
00:07:52.322 "nvme_iov_md": false
00:07:52.322 },
00:07:52.322 "memory_domains": [
00:07:52.322 {
00:07:52.322 "dma_device_id": "system",
00:07:52.322 "dma_device_type": 1
00:07:52.322 },
00:07:52.322 {
00:07:52.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:52.322 "dma_device_type": 2
00:07:52.322 },
00:07:52.322 {
00:07:52.322 "dma_device_id": "system",
00:07:52.322 "dma_device_type": 1
00:07:52.322 },
00:07:52.322 {
00:07:52.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:52.322 "dma_device_type": 2
00:07:52.322 }
00:07:52.322 ],
00:07:52.322 "driver_specific": {
00:07:52.322 "raid": {
00:07:52.322 "uuid": "0ae65bf5-1f0b-4bc2-ba86-9046fff73cca",
00:07:52.322 "strip_size_kb": 0,
00:07:52.322 "state": "online",
00:07:52.322 "raid_level": "raid1",
00:07:52.322 "superblock": false,
00:07:52.322 "num_base_bdevs": 2,
00:07:52.322 "num_base_bdevs_discovered": 2,
00:07:52.322 "num_base_bdevs_operational": 2,
00:07:52.322 "base_bdevs_list": [
00:07:52.322 {
00:07:52.322 "name": "BaseBdev1",
00:07:52.322 "uuid": "2f5abe43-3ec6-4715-82f7-43fd0ec9b6bc",
00:07:52.322 "is_configured": true,
00:07:52.322 "data_offset": 0,
00:07:52.322 "data_size": 65536
00:07:52.322 },
00:07:52.322 {
00:07:52.322 "name": "BaseBdev2",
00:07:52.322 "uuid": "4b0c8e70-add1-4fa6-8223-2d1b16694884",
00:07:52.323 "is_configured": true,
00:07:52.323 "data_offset": 0,
00:07:52.323 "data_size": 65536
00:07:52.323 }
00:07:52.323 ]
00:07:52.323 }
00:07:52.323 }
00:07:52.323 }'
00:07:52.323 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:07:52.583 BaseBdev2'
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:52.583 [2024-10-09 01:27:51.412133] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 --
# local raid_level=raid1 00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.583 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.843 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.843 "name": "Existed_Raid", 00:07:52.843 "uuid": "0ae65bf5-1f0b-4bc2-ba86-9046fff73cca", 00:07:52.843 "strip_size_kb": 0, 00:07:52.843 "state": "online", 00:07:52.843 "raid_level": "raid1", 00:07:52.843 "superblock": false, 00:07:52.843 "num_base_bdevs": 2, 00:07:52.843 "num_base_bdevs_discovered": 1, 00:07:52.843 "num_base_bdevs_operational": 1, 00:07:52.843 "base_bdevs_list": [ 00:07:52.843 { 00:07:52.843 "name": null, 00:07:52.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.843 "is_configured": false, 00:07:52.843 "data_offset": 0, 00:07:52.843 "data_size": 65536 00:07:52.843 }, 00:07:52.843 { 00:07:52.843 
"name": "BaseBdev2", 00:07:52.843 "uuid": "4b0c8e70-add1-4fa6-8223-2d1b16694884", 00:07:52.843 "is_configured": true, 00:07:52.843 "data_offset": 0, 00:07:52.843 "data_size": 65536 00:07:52.843 } 00:07:52.843 ] 00:07:52.843 }' 00:07:52.843 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.843 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.103 [2024-10-09 01:27:51.861147] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:53.103 [2024-10-09 01:27:51.861272] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:07:53.103 [2024-10-09 01:27:51.882845] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.103 [2024-10-09 01:27:51.882910] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:53.103 [2024-10-09 01:27:51.882922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 75069 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 75069 ']' 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 75069 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@955 -- # uname 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75069 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:53.103 killing process with pid 75069 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75069' 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 75069 00:07:53.103 [2024-10-09 01:27:51.980537] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:53.103 01:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 75069 00:07:53.103 [2024-10-09 01:27:51.982189] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:53.673 00:07:53.673 real 0m4.139s 00:07:53.673 user 0m6.328s 00:07:53.673 sys 0m0.909s 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.673 ************************************ 00:07:53.673 END TEST raid_state_function_test 00:07:53.673 ************************************ 00:07:53.673 01:27:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:53.673 01:27:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:53.673 01:27:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.673 
01:27:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:53.673 ************************************ 00:07:53.673 START TEST raid_state_function_test_sb 00:07:53.673 ************************************ 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local 
raid_bdev_name=Existed_Raid 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75311 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75311' 00:07:53.673 Process raid pid: 75311 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75311 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75311 ']' 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:53.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.673 01:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.673 [2024-10-09 01:27:52.541216] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:07:53.673 [2024-10-09 01:27:52.541356] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.933 [2024-10-09 01:27:52.679902] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:53.934 [2024-10-09 01:27:52.694662] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.934 [2024-10-09 01:27:52.766132] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.193 [2024-10-09 01:27:52.842764] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.193 [2024-10-09 01:27:52.842811] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.763 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:54.763 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:54.763 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:54.763 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.763 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.763 [2024-10-09 01:27:53.363348] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:54.763 [2024-10-09 01:27:53.363398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:54.763 [2024-10-09 01:27:53.363409] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:54.763 [2024-10-09 01:27:53.363416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:54.763 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.763 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:54.763 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.763 01:27:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:54.763 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:54.763 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:54.763 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.763 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.763 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.763 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.763 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.763 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.763 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.763 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.763 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.763 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.763 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.763 "name": "Existed_Raid", 00:07:54.763 "uuid": "191f8ddc-db8f-401a-b36b-9dcb07cdbcd6", 00:07:54.763 "strip_size_kb": 0, 00:07:54.763 "state": "configuring", 00:07:54.763 "raid_level": "raid1", 00:07:54.763 "superblock": true, 00:07:54.763 "num_base_bdevs": 2, 00:07:54.763 "num_base_bdevs_discovered": 0, 00:07:54.763 "num_base_bdevs_operational": 2, 00:07:54.763 "base_bdevs_list": [ 00:07:54.763 { 
00:07:54.763 "name": "BaseBdev1", 00:07:54.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.763 "is_configured": false, 00:07:54.763 "data_offset": 0, 00:07:54.763 "data_size": 0 00:07:54.763 }, 00:07:54.763 { 00:07:54.763 "name": "BaseBdev2", 00:07:54.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.763 "is_configured": false, 00:07:54.763 "data_offset": 0, 00:07:54.763 "data_size": 0 00:07:54.763 } 00:07:54.763 ] 00:07:54.763 }' 00:07:54.763 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.763 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.023 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:55.023 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.024 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.024 [2024-10-09 01:27:53.847486] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:55.024 [2024-10-09 01:27:53.847554] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:55.024 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.024 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:55.024 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.024 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.024 [2024-10-09 01:27:53.855411] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:55.024 [2024-10-09 01:27:53.855447] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:55.024 [2024-10-09 01:27:53.855458] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:55.024 [2024-10-09 01:27:53.855464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:55.024 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.024 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:55.024 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.024 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.024 [2024-10-09 01:27:53.878497] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:55.024 BaseBdev1 00:07:55.024 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.024 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:55.024 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:55.024 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:55.024 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:55.024 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:55.024 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:55.024 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:55.024 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.024 
01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.024 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.024 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:55.024 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.024 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.024 [ 00:07:55.024 { 00:07:55.024 "name": "BaseBdev1", 00:07:55.024 "aliases": [ 00:07:55.024 "217cdbc2-f4a6-4b1e-8e5d-e551f8d425e2" 00:07:55.024 ], 00:07:55.024 "product_name": "Malloc disk", 00:07:55.024 "block_size": 512, 00:07:55.024 "num_blocks": 65536, 00:07:55.024 "uuid": "217cdbc2-f4a6-4b1e-8e5d-e551f8d425e2", 00:07:55.024 "assigned_rate_limits": { 00:07:55.024 "rw_ios_per_sec": 0, 00:07:55.024 "rw_mbytes_per_sec": 0, 00:07:55.024 "r_mbytes_per_sec": 0, 00:07:55.024 "w_mbytes_per_sec": 0 00:07:55.024 }, 00:07:55.024 "claimed": true, 00:07:55.024 "claim_type": "exclusive_write", 00:07:55.024 "zoned": false, 00:07:55.024 "supported_io_types": { 00:07:55.024 "read": true, 00:07:55.024 "write": true, 00:07:55.024 "unmap": true, 00:07:55.024 "flush": true, 00:07:55.024 "reset": true, 00:07:55.024 "nvme_admin": false, 00:07:55.024 "nvme_io": false, 00:07:55.024 "nvme_io_md": false, 00:07:55.024 "write_zeroes": true, 00:07:55.024 "zcopy": true, 00:07:55.024 "get_zone_info": false, 00:07:55.024 "zone_management": false, 00:07:55.024 "zone_append": false, 00:07:55.024 "compare": false, 00:07:55.024 "compare_and_write": false, 00:07:55.024 "abort": true, 00:07:55.024 "seek_hole": false, 00:07:55.024 "seek_data": false, 00:07:55.024 "copy": true, 00:07:55.024 "nvme_iov_md": false 00:07:55.024 }, 00:07:55.024 "memory_domains": [ 00:07:55.024 { 00:07:55.024 "dma_device_id": "system", 00:07:55.024 
"dma_device_type": 1 00:07:55.024 }, 00:07:55.024 { 00:07:55.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.024 "dma_device_type": 2 00:07:55.024 } 00:07:55.024 ], 00:07:55.024 "driver_specific": {} 00:07:55.024 } 00:07:55.024 ] 00:07:55.024 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.024 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:55.024 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:55.283 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.283 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.283 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:55.283 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:55.283 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.283 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.283 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.283 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.283 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.283 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.283 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.283 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:55.283 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.283 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.283 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.283 "name": "Existed_Raid", 00:07:55.283 "uuid": "ed18ffc9-958c-44e4-8e6c-a2885b9ec20f", 00:07:55.283 "strip_size_kb": 0, 00:07:55.284 "state": "configuring", 00:07:55.284 "raid_level": "raid1", 00:07:55.284 "superblock": true, 00:07:55.284 "num_base_bdevs": 2, 00:07:55.284 "num_base_bdevs_discovered": 1, 00:07:55.284 "num_base_bdevs_operational": 2, 00:07:55.284 "base_bdevs_list": [ 00:07:55.284 { 00:07:55.284 "name": "BaseBdev1", 00:07:55.284 "uuid": "217cdbc2-f4a6-4b1e-8e5d-e551f8d425e2", 00:07:55.284 "is_configured": true, 00:07:55.284 "data_offset": 2048, 00:07:55.284 "data_size": 63488 00:07:55.284 }, 00:07:55.284 { 00:07:55.284 "name": "BaseBdev2", 00:07:55.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.284 "is_configured": false, 00:07:55.284 "data_offset": 0, 00:07:55.284 "data_size": 0 00:07:55.284 } 00:07:55.284 ] 00:07:55.284 }' 00:07:55.284 01:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.284 01:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.544 [2024-10-09 01:27:54.318695] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:55.544 [2024-10-09 01:27:54.318779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name Existed_Raid, state configuring 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.544 [2024-10-09 01:27:54.326670] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:55.544 [2024-10-09 01:27:54.328727] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:55.544 [2024-10-09 01:27:54.328764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.544 "name": "Existed_Raid", 00:07:55.544 "uuid": "9fe136c2-51f9-47e9-ab7d-c62e85033f9e", 00:07:55.544 "strip_size_kb": 0, 00:07:55.544 "state": "configuring", 00:07:55.544 "raid_level": "raid1", 00:07:55.544 "superblock": true, 00:07:55.544 "num_base_bdevs": 2, 00:07:55.544 "num_base_bdevs_discovered": 1, 00:07:55.544 "num_base_bdevs_operational": 2, 00:07:55.544 "base_bdevs_list": [ 00:07:55.544 { 00:07:55.544 "name": "BaseBdev1", 00:07:55.544 "uuid": "217cdbc2-f4a6-4b1e-8e5d-e551f8d425e2", 00:07:55.544 "is_configured": true, 00:07:55.544 "data_offset": 2048, 00:07:55.544 "data_size": 63488 00:07:55.544 }, 00:07:55.544 { 00:07:55.544 "name": "BaseBdev2", 00:07:55.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.544 "is_configured": false, 00:07:55.544 "data_offset": 0, 00:07:55.544 "data_size": 0 
00:07:55.544 } 00:07:55.544 ] 00:07:55.544 }' 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.544 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.113 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:56.113 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.113 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.113 [2024-10-09 01:27:54.771772] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:56.113 [2024-10-09 01:27:54.771997] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:56.113 [2024-10-09 01:27:54.772022] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:56.113 [2024-10-09 01:27:54.772383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:56.113 BaseBdev2 00:07:56.113 [2024-10-09 01:27:54.772585] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:56.113 [2024-10-09 01:27:54.772605] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:56.113 [2024-10-09 01:27:54.772759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.113 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.113 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:56.113 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:56.113 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:07:56.113 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.114 [ 00:07:56.114 { 00:07:56.114 "name": "BaseBdev2", 00:07:56.114 "aliases": [ 00:07:56.114 "54e9264a-02d9-4372-9138-07d428ed4ab7" 00:07:56.114 ], 00:07:56.114 "product_name": "Malloc disk", 00:07:56.114 "block_size": 512, 00:07:56.114 "num_blocks": 65536, 00:07:56.114 "uuid": "54e9264a-02d9-4372-9138-07d428ed4ab7", 00:07:56.114 "assigned_rate_limits": { 00:07:56.114 "rw_ios_per_sec": 0, 00:07:56.114 "rw_mbytes_per_sec": 0, 00:07:56.114 "r_mbytes_per_sec": 0, 00:07:56.114 "w_mbytes_per_sec": 0 00:07:56.114 }, 00:07:56.114 "claimed": true, 00:07:56.114 "claim_type": "exclusive_write", 00:07:56.114 "zoned": false, 00:07:56.114 "supported_io_types": { 00:07:56.114 "read": true, 00:07:56.114 "write": true, 00:07:56.114 "unmap": true, 00:07:56.114 "flush": true, 00:07:56.114 "reset": true, 00:07:56.114 "nvme_admin": false, 
00:07:56.114 "nvme_io": false, 00:07:56.114 "nvme_io_md": false, 00:07:56.114 "write_zeroes": true, 00:07:56.114 "zcopy": true, 00:07:56.114 "get_zone_info": false, 00:07:56.114 "zone_management": false, 00:07:56.114 "zone_append": false, 00:07:56.114 "compare": false, 00:07:56.114 "compare_and_write": false, 00:07:56.114 "abort": true, 00:07:56.114 "seek_hole": false, 00:07:56.114 "seek_data": false, 00:07:56.114 "copy": true, 00:07:56.114 "nvme_iov_md": false 00:07:56.114 }, 00:07:56.114 "memory_domains": [ 00:07:56.114 { 00:07:56.114 "dma_device_id": "system", 00:07:56.114 "dma_device_type": 1 00:07:56.114 }, 00:07:56.114 { 00:07:56.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.114 "dma_device_type": 2 00:07:56.114 } 00:07:56.114 ], 00:07:56.114 "driver_specific": {} 00:07:56.114 } 00:07:56.114 ] 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.114 
01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.114 "name": "Existed_Raid", 00:07:56.114 "uuid": "9fe136c2-51f9-47e9-ab7d-c62e85033f9e", 00:07:56.114 "strip_size_kb": 0, 00:07:56.114 "state": "online", 00:07:56.114 "raid_level": "raid1", 00:07:56.114 "superblock": true, 00:07:56.114 "num_base_bdevs": 2, 00:07:56.114 "num_base_bdevs_discovered": 2, 00:07:56.114 "num_base_bdevs_operational": 2, 00:07:56.114 "base_bdevs_list": [ 00:07:56.114 { 00:07:56.114 "name": "BaseBdev1", 00:07:56.114 "uuid": "217cdbc2-f4a6-4b1e-8e5d-e551f8d425e2", 00:07:56.114 "is_configured": true, 00:07:56.114 "data_offset": 2048, 00:07:56.114 "data_size": 63488 00:07:56.114 }, 00:07:56.114 { 00:07:56.114 "name": "BaseBdev2", 00:07:56.114 "uuid": "54e9264a-02d9-4372-9138-07d428ed4ab7", 00:07:56.114 "is_configured": true, 00:07:56.114 "data_offset": 2048, 00:07:56.114 "data_size": 63488 00:07:56.114 } 00:07:56.114 ] 00:07:56.114 
}' 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.114 01:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:56.684 [2024-10-09 01:27:55.284229] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:56.684 "name": "Existed_Raid", 00:07:56.684 "aliases": [ 00:07:56.684 "9fe136c2-51f9-47e9-ab7d-c62e85033f9e" 00:07:56.684 ], 00:07:56.684 "product_name": "Raid Volume", 00:07:56.684 "block_size": 512, 00:07:56.684 "num_blocks": 63488, 00:07:56.684 "uuid": 
"9fe136c2-51f9-47e9-ab7d-c62e85033f9e", 00:07:56.684 "assigned_rate_limits": { 00:07:56.684 "rw_ios_per_sec": 0, 00:07:56.684 "rw_mbytes_per_sec": 0, 00:07:56.684 "r_mbytes_per_sec": 0, 00:07:56.684 "w_mbytes_per_sec": 0 00:07:56.684 }, 00:07:56.684 "claimed": false, 00:07:56.684 "zoned": false, 00:07:56.684 "supported_io_types": { 00:07:56.684 "read": true, 00:07:56.684 "write": true, 00:07:56.684 "unmap": false, 00:07:56.684 "flush": false, 00:07:56.684 "reset": true, 00:07:56.684 "nvme_admin": false, 00:07:56.684 "nvme_io": false, 00:07:56.684 "nvme_io_md": false, 00:07:56.684 "write_zeroes": true, 00:07:56.684 "zcopy": false, 00:07:56.684 "get_zone_info": false, 00:07:56.684 "zone_management": false, 00:07:56.684 "zone_append": false, 00:07:56.684 "compare": false, 00:07:56.684 "compare_and_write": false, 00:07:56.684 "abort": false, 00:07:56.684 "seek_hole": false, 00:07:56.684 "seek_data": false, 00:07:56.684 "copy": false, 00:07:56.684 "nvme_iov_md": false 00:07:56.684 }, 00:07:56.684 "memory_domains": [ 00:07:56.684 { 00:07:56.684 "dma_device_id": "system", 00:07:56.684 "dma_device_type": 1 00:07:56.684 }, 00:07:56.684 { 00:07:56.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.684 "dma_device_type": 2 00:07:56.684 }, 00:07:56.684 { 00:07:56.684 "dma_device_id": "system", 00:07:56.684 "dma_device_type": 1 00:07:56.684 }, 00:07:56.684 { 00:07:56.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.684 "dma_device_type": 2 00:07:56.684 } 00:07:56.684 ], 00:07:56.684 "driver_specific": { 00:07:56.684 "raid": { 00:07:56.684 "uuid": "9fe136c2-51f9-47e9-ab7d-c62e85033f9e", 00:07:56.684 "strip_size_kb": 0, 00:07:56.684 "state": "online", 00:07:56.684 "raid_level": "raid1", 00:07:56.684 "superblock": true, 00:07:56.684 "num_base_bdevs": 2, 00:07:56.684 "num_base_bdevs_discovered": 2, 00:07:56.684 "num_base_bdevs_operational": 2, 00:07:56.684 "base_bdevs_list": [ 00:07:56.684 { 00:07:56.684 "name": "BaseBdev1", 00:07:56.684 "uuid": 
"217cdbc2-f4a6-4b1e-8e5d-e551f8d425e2", 00:07:56.684 "is_configured": true, 00:07:56.684 "data_offset": 2048, 00:07:56.684 "data_size": 63488 00:07:56.684 }, 00:07:56.684 { 00:07:56.684 "name": "BaseBdev2", 00:07:56.684 "uuid": "54e9264a-02d9-4372-9138-07d428ed4ab7", 00:07:56.684 "is_configured": true, 00:07:56.684 "data_offset": 2048, 00:07:56.684 "data_size": 63488 00:07:56.684 } 00:07:56.684 ] 00:07:56.684 } 00:07:56.684 } 00:07:56.684 }' 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:56.684 BaseBdev2' 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ 
]] 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.684 [2024-10-09 01:27:55.504139] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:56.684 01:27:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.684 01:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.944 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.944 "name": "Existed_Raid", 
00:07:56.944 "uuid": "9fe136c2-51f9-47e9-ab7d-c62e85033f9e", 00:07:56.944 "strip_size_kb": 0, 00:07:56.944 "state": "online", 00:07:56.944 "raid_level": "raid1", 00:07:56.944 "superblock": true, 00:07:56.944 "num_base_bdevs": 2, 00:07:56.944 "num_base_bdevs_discovered": 1, 00:07:56.944 "num_base_bdevs_operational": 1, 00:07:56.944 "base_bdevs_list": [ 00:07:56.944 { 00:07:56.944 "name": null, 00:07:56.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.944 "is_configured": false, 00:07:56.944 "data_offset": 0, 00:07:56.944 "data_size": 63488 00:07:56.944 }, 00:07:56.944 { 00:07:56.944 "name": "BaseBdev2", 00:07:56.944 "uuid": "54e9264a-02d9-4372-9138-07d428ed4ab7", 00:07:56.944 "is_configured": true, 00:07:56.944 "data_offset": 2048, 00:07:56.944 "data_size": 63488 00:07:56.944 } 00:07:56.944 ] 00:07:56.944 }' 00:07:56.944 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.944 01:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.202 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:57.202 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:57.202 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.202 01:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.202 01:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:57.202 01:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.202 01:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.202 01:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:57.202 01:27:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:57.202 01:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:57.202 01:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.202 01:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.202 [2024-10-09 01:27:56.049072] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:57.202 [2024-10-09 01:27:56.049211] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.202 [2024-10-09 01:27:56.070384] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.202 [2024-10-09 01:27:56.070446] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.202 [2024-10-09 01:27:56.070455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:57.202 01:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.202 01:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:57.202 01:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:57.202 01:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.202 01:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:57.202 01:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.202 01:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.202 01:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.461 01:27:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:57.461 01:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:57.461 01:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:57.461 01:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75311 00:07:57.461 01:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75311 ']' 00:07:57.461 01:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 75311 00:07:57.461 01:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:57.461 01:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:57.461 01:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75311 00:07:57.461 01:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:57.461 01:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:57.461 killing process with pid 75311 00:07:57.461 01:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75311' 00:07:57.461 01:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 75311 00:07:57.461 [2024-10-09 01:27:56.166299] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:57.461 01:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 75311 00:07:57.461 [2024-10-09 01:27:56.167862] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:57.721 01:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:57.721 00:07:57.721 real 0m4.099s 00:07:57.721 user 0m6.269s 
00:07:57.721 sys 0m0.905s 00:07:57.721 01:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.721 01:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.721 ************************************ 00:07:57.721 END TEST raid_state_function_test_sb 00:07:57.721 ************************************ 00:07:57.721 01:27:56 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:57.721 01:27:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:57.721 01:27:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.721 01:27:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:57.721 ************************************ 00:07:57.721 START TEST raid_superblock_test 00:07:57.721 ************************************ 00:07:57.721 01:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:07:57.721 01:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:07:57.721 01:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:57.721 01:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:57.721 01:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:57.981 01:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:57.981 01:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:57.981 01:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:57.981 01:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:57.981 01:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:57.981 
01:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:57.981 01:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:57.981 01:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:57.981 01:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:57.981 01:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:57.981 01:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:57.981 01:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=75557 00:07:57.981 01:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:57.981 01:27:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 75557 00:07:57.981 01:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 75557 ']' 00:07:57.981 01:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.981 01:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.981 01:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.981 01:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.981 01:27:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.981 [2024-10-09 01:27:56.699792] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:07:57.982 [2024-10-09 01:27:56.699927] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75557 ] 00:07:57.982 [2024-10-09 01:27:56.832515] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:57.982 [2024-10-09 01:27:56.862153] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.241 [2024-10-09 01:27:56.932161] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.241 [2024-10-09 01:27:57.011482] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.242 [2024-10-09 01:27:57.011530] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.822 01:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.822 01:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:58.822 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:58.822 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:58.822 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:58.822 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:58.822 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:58.822 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:58.822 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:58.822 01:27:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:58.822 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:58.822 01:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.822 01:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.822 malloc1 00:07:58.822 01:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.822 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:58.822 01:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.822 01:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.822 [2024-10-09 01:27:57.554772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:58.822 [2024-10-09 01:27:57.554847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.822 [2024-10-09 01:27:57.554872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:58.822 [2024-10-09 01:27:57.554884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.822 [2024-10-09 01:27:57.557315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.822 [2024-10-09 01:27:57.557348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:58.822 pt1 00:07:58.822 01:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.822 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:58.822 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:58.822 01:27:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:58.822 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.823 malloc2 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.823 [2024-10-09 01:27:57.597991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:58.823 [2024-10-09 01:27:57.598057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.823 [2024-10-09 01:27:57.598079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:58.823 [2024-10-09 01:27:57.598087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.823 [2024-10-09 01:27:57.600524] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.823 [2024-10-09 01:27:57.600566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:58.823 pt2 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.823 [2024-10-09 01:27:57.610073] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:58.823 [2024-10-09 01:27:57.612149] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:58.823 [2024-10-09 01:27:57.612423] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:58.823 [2024-10-09 01:27:57.612446] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:58.823 [2024-10-09 01:27:57.612775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:58.823 [2024-10-09 01:27:57.612923] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:58.823 [2024-10-09 01:27:57.612942] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:58.823 [2024-10-09 01:27:57.613100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.823 "name": "raid_bdev1", 00:07:58.823 "uuid": "a8193aa7-d63a-49d4-94a7-ecf8e2d3dc0a", 00:07:58.823 "strip_size_kb": 0, 00:07:58.823 "state": "online", 00:07:58.823 "raid_level": "raid1", 00:07:58.823 "superblock": true, 00:07:58.823 
"num_base_bdevs": 2, 00:07:58.823 "num_base_bdevs_discovered": 2, 00:07:58.823 "num_base_bdevs_operational": 2, 00:07:58.823 "base_bdevs_list": [ 00:07:58.823 { 00:07:58.823 "name": "pt1", 00:07:58.823 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:58.823 "is_configured": true, 00:07:58.823 "data_offset": 2048, 00:07:58.823 "data_size": 63488 00:07:58.823 }, 00:07:58.823 { 00:07:58.823 "name": "pt2", 00:07:58.823 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.823 "is_configured": true, 00:07:58.823 "data_offset": 2048, 00:07:58.823 "data_size": 63488 00:07:58.823 } 00:07:58.823 ] 00:07:58.823 }' 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.823 01:27:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.393 [2024-10-09 01:27:58.098481] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:59.393 "name": "raid_bdev1", 00:07:59.393 "aliases": [ 00:07:59.393 "a8193aa7-d63a-49d4-94a7-ecf8e2d3dc0a" 00:07:59.393 ], 00:07:59.393 "product_name": "Raid Volume", 00:07:59.393 "block_size": 512, 00:07:59.393 "num_blocks": 63488, 00:07:59.393 "uuid": "a8193aa7-d63a-49d4-94a7-ecf8e2d3dc0a", 00:07:59.393 "assigned_rate_limits": { 00:07:59.393 "rw_ios_per_sec": 0, 00:07:59.393 "rw_mbytes_per_sec": 0, 00:07:59.393 "r_mbytes_per_sec": 0, 00:07:59.393 "w_mbytes_per_sec": 0 00:07:59.393 }, 00:07:59.393 "claimed": false, 00:07:59.393 "zoned": false, 00:07:59.393 "supported_io_types": { 00:07:59.393 "read": true, 00:07:59.393 "write": true, 00:07:59.393 "unmap": false, 00:07:59.393 "flush": false, 00:07:59.393 "reset": true, 00:07:59.393 "nvme_admin": false, 00:07:59.393 "nvme_io": false, 00:07:59.393 "nvme_io_md": false, 00:07:59.393 "write_zeroes": true, 00:07:59.393 "zcopy": false, 00:07:59.393 "get_zone_info": false, 00:07:59.393 "zone_management": false, 00:07:59.393 "zone_append": false, 00:07:59.393 "compare": false, 00:07:59.393 "compare_and_write": false, 00:07:59.393 "abort": false, 00:07:59.393 "seek_hole": false, 00:07:59.393 "seek_data": false, 00:07:59.393 "copy": false, 00:07:59.393 "nvme_iov_md": false 00:07:59.393 }, 00:07:59.393 "memory_domains": [ 00:07:59.393 { 00:07:59.393 "dma_device_id": "system", 00:07:59.393 "dma_device_type": 1 00:07:59.393 }, 00:07:59.393 { 00:07:59.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.393 "dma_device_type": 2 00:07:59.393 }, 00:07:59.393 { 00:07:59.393 "dma_device_id": "system", 00:07:59.393 "dma_device_type": 1 00:07:59.393 }, 00:07:59.393 { 00:07:59.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.393 "dma_device_type": 2 00:07:59.393 } 00:07:59.393 ], 00:07:59.393 
"driver_specific": { 00:07:59.393 "raid": { 00:07:59.393 "uuid": "a8193aa7-d63a-49d4-94a7-ecf8e2d3dc0a", 00:07:59.393 "strip_size_kb": 0, 00:07:59.393 "state": "online", 00:07:59.393 "raid_level": "raid1", 00:07:59.393 "superblock": true, 00:07:59.393 "num_base_bdevs": 2, 00:07:59.393 "num_base_bdevs_discovered": 2, 00:07:59.393 "num_base_bdevs_operational": 2, 00:07:59.393 "base_bdevs_list": [ 00:07:59.393 { 00:07:59.393 "name": "pt1", 00:07:59.393 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:59.393 "is_configured": true, 00:07:59.393 "data_offset": 2048, 00:07:59.393 "data_size": 63488 00:07:59.393 }, 00:07:59.393 { 00:07:59.393 "name": "pt2", 00:07:59.393 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:59.393 "is_configured": true, 00:07:59.393 "data_offset": 2048, 00:07:59.393 "data_size": 63488 00:07:59.393 } 00:07:59.393 ] 00:07:59.393 } 00:07:59.393 } 00:07:59.393 }' 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:59.393 pt2' 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.393 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.653 [2024-10-09 01:27:58.318383] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # 
raid_bdev_uuid=a8193aa7-d63a-49d4-94a7-ecf8e2d3dc0a 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a8193aa7-d63a-49d4-94a7-ecf8e2d3dc0a ']' 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.653 [2024-10-09 01:27:58.366160] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:59.653 [2024-10-09 01:27:58.366221] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.653 [2024-10-09 01:27:58.366325] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.653 [2024-10-09 01:27:58.366406] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:59.653 [2024-10-09 01:27:58.366441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 
00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:59.653 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 
00:07:59.654 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:59.654 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:59.654 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.654 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:59.654 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.654 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:59.654 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.654 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.654 [2024-10-09 01:27:58.506225] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:59.654 [2024-10-09 01:27:58.508336] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:59.654 [2024-10-09 01:27:58.508454] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:59.654 [2024-10-09 01:27:58.508555] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:59.654 [2024-10-09 01:27:58.508599] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:59.654 [2024-10-09 01:27:58.508628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:59.654 request: 00:07:59.654 { 00:07:59.654 "name": "raid_bdev1", 00:07:59.654 "raid_level": "raid1", 00:07:59.654 "base_bdevs": [ 
00:07:59.654 "malloc1", 00:07:59.654 "malloc2" 00:07:59.654 ], 00:07:59.654 "superblock": false, 00:07:59.654 "method": "bdev_raid_create", 00:07:59.654 "req_id": 1 00:07:59.654 } 00:07:59.654 Got JSON-RPC error response 00:07:59.654 response: 00:07:59.654 { 00:07:59.654 "code": -17, 00:07:59.654 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:59.654 } 00:07:59.654 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:59.654 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:59.654 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:59.654 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:59.654 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:59.654 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.654 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:59.654 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.654 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.654 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.914 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:59.914 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:59.914 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:59.914 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.914 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.914 [2024-10-09 
01:27:58.574226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:59.914 [2024-10-09 01:27:58.574309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.914 [2024-10-09 01:27:58.574327] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:59.914 [2024-10-09 01:27:58.574341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.914 [2024-10-09 01:27:58.576700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.914 [2024-10-09 01:27:58.576735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:59.914 [2024-10-09 01:27:58.576797] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:59.914 [2024-10-09 01:27:58.576842] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:59.914 pt1 00:07:59.914 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.914 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:59.914 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:59.914 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.914 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:59.914 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:59.914 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.914 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.914 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.914 01:27:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.914 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.914 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.914 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.914 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.914 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.914 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.914 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.914 "name": "raid_bdev1", 00:07:59.914 "uuid": "a8193aa7-d63a-49d4-94a7-ecf8e2d3dc0a", 00:07:59.914 "strip_size_kb": 0, 00:07:59.914 "state": "configuring", 00:07:59.914 "raid_level": "raid1", 00:07:59.914 "superblock": true, 00:07:59.914 "num_base_bdevs": 2, 00:07:59.914 "num_base_bdevs_discovered": 1, 00:07:59.914 "num_base_bdevs_operational": 2, 00:07:59.914 "base_bdevs_list": [ 00:07:59.914 { 00:07:59.914 "name": "pt1", 00:07:59.914 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:59.914 "is_configured": true, 00:07:59.914 "data_offset": 2048, 00:07:59.914 "data_size": 63488 00:07:59.914 }, 00:07:59.914 { 00:07:59.914 "name": null, 00:07:59.914 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:59.914 "is_configured": false, 00:07:59.914 "data_offset": 2048, 00:07:59.914 "data_size": 63488 00:07:59.914 } 00:07:59.914 ] 00:07:59.914 }' 00:07:59.914 01:27:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.914 01:27:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.174 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 
00:08:00.174 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:00.174 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:00.174 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:00.174 01:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.174 01:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.174 [2024-10-09 01:27:59.054391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:00.174 [2024-10-09 01:27:59.054554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.174 [2024-10-09 01:27:59.054594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:00.174 [2024-10-09 01:27:59.054625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.174 [2024-10-09 01:27:59.055125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.174 [2024-10-09 01:27:59.055185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:00.174 [2024-10-09 01:27:59.055294] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:00.174 [2024-10-09 01:27:59.055346] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:00.174 [2024-10-09 01:27:59.055475] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:00.174 [2024-10-09 01:27:59.055518] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:00.174 [2024-10-09 01:27:59.055775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:00.174 [2024-10-09 01:27:59.055937] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:00.174 [2024-10-09 01:27:59.055977] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:00.174 [2024-10-09 01:27:59.056122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.174 pt2 00:08:00.174 01:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.174 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:00.174 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:00.174 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:00.174 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:00.174 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.174 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:00.174 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:00.174 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.174 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.174 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.174 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.174 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.434 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.434 01:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:00.435 01:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.435 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.435 01:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.435 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.435 "name": "raid_bdev1", 00:08:00.435 "uuid": "a8193aa7-d63a-49d4-94a7-ecf8e2d3dc0a", 00:08:00.435 "strip_size_kb": 0, 00:08:00.435 "state": "online", 00:08:00.435 "raid_level": "raid1", 00:08:00.435 "superblock": true, 00:08:00.435 "num_base_bdevs": 2, 00:08:00.435 "num_base_bdevs_discovered": 2, 00:08:00.435 "num_base_bdevs_operational": 2, 00:08:00.435 "base_bdevs_list": [ 00:08:00.435 { 00:08:00.435 "name": "pt1", 00:08:00.435 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:00.435 "is_configured": true, 00:08:00.435 "data_offset": 2048, 00:08:00.435 "data_size": 63488 00:08:00.435 }, 00:08:00.435 { 00:08:00.435 "name": "pt2", 00:08:00.435 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:00.435 "is_configured": true, 00:08:00.435 "data_offset": 2048, 00:08:00.435 "data_size": 63488 00:08:00.435 } 00:08:00.435 ] 00:08:00.435 }' 00:08:00.435 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.435 01:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.695 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:00.695 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:00.695 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:00.695 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:00.695 01:27:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:00.695 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:00.695 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:00.695 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:00.695 01:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.695 01:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.695 [2024-10-09 01:27:59.502790] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:00.695 01:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.695 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:00.695 "name": "raid_bdev1", 00:08:00.695 "aliases": [ 00:08:00.695 "a8193aa7-d63a-49d4-94a7-ecf8e2d3dc0a" 00:08:00.695 ], 00:08:00.695 "product_name": "Raid Volume", 00:08:00.695 "block_size": 512, 00:08:00.695 "num_blocks": 63488, 00:08:00.695 "uuid": "a8193aa7-d63a-49d4-94a7-ecf8e2d3dc0a", 00:08:00.695 "assigned_rate_limits": { 00:08:00.695 "rw_ios_per_sec": 0, 00:08:00.695 "rw_mbytes_per_sec": 0, 00:08:00.695 "r_mbytes_per_sec": 0, 00:08:00.695 "w_mbytes_per_sec": 0 00:08:00.695 }, 00:08:00.695 "claimed": false, 00:08:00.695 "zoned": false, 00:08:00.695 "supported_io_types": { 00:08:00.695 "read": true, 00:08:00.695 "write": true, 00:08:00.695 "unmap": false, 00:08:00.695 "flush": false, 00:08:00.695 "reset": true, 00:08:00.695 "nvme_admin": false, 00:08:00.695 "nvme_io": false, 00:08:00.695 "nvme_io_md": false, 00:08:00.695 "write_zeroes": true, 00:08:00.695 "zcopy": false, 00:08:00.695 "get_zone_info": false, 00:08:00.695 "zone_management": false, 00:08:00.695 "zone_append": false, 00:08:00.695 "compare": false, 00:08:00.695 
"compare_and_write": false, 00:08:00.695 "abort": false, 00:08:00.695 "seek_hole": false, 00:08:00.695 "seek_data": false, 00:08:00.695 "copy": false, 00:08:00.695 "nvme_iov_md": false 00:08:00.695 }, 00:08:00.695 "memory_domains": [ 00:08:00.695 { 00:08:00.695 "dma_device_id": "system", 00:08:00.695 "dma_device_type": 1 00:08:00.695 }, 00:08:00.695 { 00:08:00.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.695 "dma_device_type": 2 00:08:00.695 }, 00:08:00.695 { 00:08:00.695 "dma_device_id": "system", 00:08:00.695 "dma_device_type": 1 00:08:00.695 }, 00:08:00.695 { 00:08:00.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.695 "dma_device_type": 2 00:08:00.695 } 00:08:00.695 ], 00:08:00.695 "driver_specific": { 00:08:00.695 "raid": { 00:08:00.695 "uuid": "a8193aa7-d63a-49d4-94a7-ecf8e2d3dc0a", 00:08:00.695 "strip_size_kb": 0, 00:08:00.695 "state": "online", 00:08:00.695 "raid_level": "raid1", 00:08:00.695 "superblock": true, 00:08:00.695 "num_base_bdevs": 2, 00:08:00.695 "num_base_bdevs_discovered": 2, 00:08:00.695 "num_base_bdevs_operational": 2, 00:08:00.695 "base_bdevs_list": [ 00:08:00.695 { 00:08:00.695 "name": "pt1", 00:08:00.695 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:00.695 "is_configured": true, 00:08:00.695 "data_offset": 2048, 00:08:00.695 "data_size": 63488 00:08:00.695 }, 00:08:00.695 { 00:08:00.695 "name": "pt2", 00:08:00.695 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:00.695 "is_configured": true, 00:08:00.695 "data_offset": 2048, 00:08:00.695 "data_size": 63488 00:08:00.695 } 00:08:00.695 ] 00:08:00.695 } 00:08:00.695 } 00:08:00.695 }' 00:08:00.695 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:00.955 pt2' 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r 
'[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:00.955 01:27:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.955 [2024-10-09 01:27:59.746746] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a8193aa7-d63a-49d4-94a7-ecf8e2d3dc0a '!=' a8193aa7-d63a-49d4-94a7-ecf8e2d3dc0a ']' 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.955 [2024-10-09 01:27:59.790572] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.955 01:27:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.955 01:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.215 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.215 "name": "raid_bdev1", 00:08:01.215 "uuid": "a8193aa7-d63a-49d4-94a7-ecf8e2d3dc0a", 00:08:01.215 "strip_size_kb": 0, 00:08:01.215 "state": "online", 00:08:01.215 "raid_level": "raid1", 00:08:01.215 "superblock": true, 00:08:01.215 "num_base_bdevs": 2, 00:08:01.215 "num_base_bdevs_discovered": 1, 00:08:01.215 "num_base_bdevs_operational": 1, 00:08:01.215 "base_bdevs_list": [ 00:08:01.215 { 00:08:01.215 "name": null, 00:08:01.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.215 "is_configured": false, 00:08:01.215 "data_offset": 0, 00:08:01.215 "data_size": 63488 00:08:01.215 }, 00:08:01.215 { 
00:08:01.215 "name": "pt2", 00:08:01.215 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:01.215 "is_configured": true, 00:08:01.215 "data_offset": 2048, 00:08:01.215 "data_size": 63488 00:08:01.215 } 00:08:01.215 ] 00:08:01.215 }' 00:08:01.215 01:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.215 01:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.475 [2024-10-09 01:28:00.262668] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:01.475 [2024-10-09 01:28:00.262743] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:01.475 [2024-10-09 01:28:00.262842] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:01.475 [2024-10-09 01:28:00.262906] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:01.475 [2024-10-09 01:28:00.262941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.475 [2024-10-09 01:28:00.338684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:01.475 [2024-10-09 01:28:00.338789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.475 [2024-10-09 
01:28:00.338823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:01.475 [2024-10-09 01:28:00.338852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.475 [2024-10-09 01:28:00.341371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.475 [2024-10-09 01:28:00.341450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:01.475 [2024-10-09 01:28:00.341571] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:01.475 [2024-10-09 01:28:00.341630] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:01.475 [2024-10-09 01:28:00.341746] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:01.475 [2024-10-09 01:28:00.341783] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:01.475 [2024-10-09 01:28:00.342031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:01.475 [2024-10-09 01:28:00.342192] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:01.475 [2024-10-09 01:28:00.342232] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:01.475 [2024-10-09 01:28:00.342384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.475 pt2 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.475 01:28:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:01.475 01:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.735 01:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.735 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.735 "name": "raid_bdev1", 00:08:01.735 "uuid": "a8193aa7-d63a-49d4-94a7-ecf8e2d3dc0a", 00:08:01.735 "strip_size_kb": 0, 00:08:01.735 "state": "online", 00:08:01.735 "raid_level": "raid1", 00:08:01.735 "superblock": true, 00:08:01.735 "num_base_bdevs": 2, 00:08:01.735 "num_base_bdevs_discovered": 1, 00:08:01.735 "num_base_bdevs_operational": 1, 00:08:01.735 "base_bdevs_list": [ 00:08:01.735 { 00:08:01.735 "name": null, 00:08:01.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.735 "is_configured": false, 00:08:01.735 "data_offset": 2048, 00:08:01.735 "data_size": 63488 00:08:01.735 }, 00:08:01.735 { 
00:08:01.735 "name": "pt2", 00:08:01.736 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:01.736 "is_configured": true, 00:08:01.736 "data_offset": 2048, 00:08:01.736 "data_size": 63488 00:08:01.736 } 00:08:01.736 ] 00:08:01.736 }' 00:08:01.736 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.736 01:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.996 [2024-10-09 01:28:00.782852] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:01.996 [2024-10-09 01:28:00.782946] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:01.996 [2024-10-09 01:28:00.783055] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:01.996 [2024-10-09 01:28:00.783127] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:01.996 [2024-10-09 01:28:00.783158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.996 [2024-10-09 01:28:00.846820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:01.996 [2024-10-09 01:28:00.846883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.996 [2024-10-09 01:28:00.846913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:01.996 [2024-10-09 01:28:00.846924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.996 [2024-10-09 01:28:00.849420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.996 [2024-10-09 01:28:00.849455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:01.996 [2024-10-09 01:28:00.849569] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:01.996 [2024-10-09 01:28:00.849615] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:01.996 [2024-10-09 01:28:00.849752] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:01.996 [2024-10-09 01:28:00.849765] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:01.996 
[2024-10-09 01:28:00.849784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:01.996 [2024-10-09 01:28:00.849821] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:01.996 [2024-10-09 01:28:00.849900] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:01.996 [2024-10-09 01:28:00.849909] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:01.996 [2024-10-09 01:28:00.850139] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:01.996 [2024-10-09 01:28:00.850264] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:01.996 [2024-10-09 01:28:00.850277] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:01.996 [2024-10-09 01:28:00.850389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.996 pt1 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.996 01:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.256 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.256 "name": "raid_bdev1", 00:08:02.256 "uuid": "a8193aa7-d63a-49d4-94a7-ecf8e2d3dc0a", 00:08:02.256 "strip_size_kb": 0, 00:08:02.256 "state": "online", 00:08:02.256 "raid_level": "raid1", 00:08:02.256 "superblock": true, 00:08:02.256 "num_base_bdevs": 2, 00:08:02.256 "num_base_bdevs_discovered": 1, 00:08:02.256 "num_base_bdevs_operational": 1, 00:08:02.256 "base_bdevs_list": [ 00:08:02.256 { 00:08:02.256 "name": null, 00:08:02.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.256 "is_configured": false, 00:08:02.256 "data_offset": 2048, 00:08:02.256 "data_size": 63488 00:08:02.256 }, 00:08:02.256 { 00:08:02.256 "name": "pt2", 00:08:02.256 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:02.256 "is_configured": true, 00:08:02.256 "data_offset": 2048, 00:08:02.256 "data_size": 63488 00:08:02.256 } 00:08:02.256 ] 00:08:02.256 }' 00:08:02.256 01:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.256 01:28:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.515 01:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:02.515 01:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:02.515 01:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.515 01:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.515 01:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.515 01:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:02.515 01:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:02.515 01:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:02.515 01:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.515 01:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.515 [2024-10-09 01:28:01.363197] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:02.515 01:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.515 01:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a8193aa7-d63a-49d4-94a7-ecf8e2d3dc0a '!=' a8193aa7-d63a-49d4-94a7-ecf8e2d3dc0a ']' 00:08:02.515 01:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 75557 00:08:02.515 01:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 75557 ']' 00:08:02.515 01:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 75557 00:08:02.515 01:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:02.774 01:28:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:02.774 01:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75557 00:08:02.774 01:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:02.774 01:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:02.774 01:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75557' 00:08:02.774 killing process with pid 75557 00:08:02.774 01:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 75557 00:08:02.774 [2024-10-09 01:28:01.447395] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:02.775 [2024-10-09 01:28:01.447494] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.775 [2024-10-09 01:28:01.447560] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:02.775 [2024-10-09 01:28:01.447574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:02.775 01:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 75557 00:08:02.775 [2024-10-09 01:28:01.490268] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:03.034 01:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:03.034 00:08:03.034 real 0m5.254s 00:08:03.034 user 0m8.400s 00:08:03.034 sys 0m1.173s 00:08:03.034 01:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.034 ************************************ 00:08:03.034 END TEST raid_superblock_test 00:08:03.034 ************************************ 00:08:03.034 01:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.034 
01:28:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:03.034 01:28:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:03.034 01:28:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.034 01:28:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:03.295 ************************************ 00:08:03.295 START TEST raid_read_error_test 00:08:03.295 ************************************ 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:03.295 01:28:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0uu7gZHPXx 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75877 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75877 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 75877 ']' 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:03.295 01:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.295 [2024-10-09 01:28:02.047158] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:08:03.295 [2024-10-09 01:28:02.047379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75877 ] 00:08:03.295 [2024-10-09 01:28:02.184592] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:03.556 [2024-10-09 01:28:02.211640] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.556 [2024-10-09 01:28:02.289006] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.556 [2024-10-09 01:28:02.366255] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.556 [2024-10-09 01:28:02.366369] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.126 01:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:04.126 01:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:04.126 01:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:04.126 01:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:04.126 01:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.126 01:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.126 BaseBdev1_malloc 00:08:04.126 01:28:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.126 01:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:04.126 01:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.126 01:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.126 true 00:08:04.126 01:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.126 01:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:04.126 01:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.126 01:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.126 [2024-10-09 01:28:02.921721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:04.126 [2024-10-09 01:28:02.921812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.126 [2024-10-09 01:28:02.921830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:04.126 [2024-10-09 01:28:02.921853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.126 [2024-10-09 01:28:02.924146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.126 [2024-10-09 01:28:02.924254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:04.126 BaseBdev1 00:08:04.126 01:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.127 BaseBdev2_malloc 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.127 true 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.127 [2024-10-09 01:28:02.978633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:04.127 [2024-10-09 01:28:02.978683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.127 [2024-10-09 01:28:02.978698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:04.127 [2024-10-09 01:28:02.978709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.127 [2024-10-09 01:28:02.980938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.127 [2024-10-09 01:28:02.981048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:04.127 BaseBdev2 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.127 [2024-10-09 01:28:02.990676] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:04.127 [2024-10-09 01:28:02.992809] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:04.127 [2024-10-09 01:28:02.992989] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:04.127 [2024-10-09 01:28:02.993004] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:04.127 [2024-10-09 01:28:02.993269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:04.127 [2024-10-09 01:28:02.993417] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:04.127 [2024-10-09 01:28:02.993427] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:04.127 [2024-10-09 01:28:02.993577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.127 01:28:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.127 01:28:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.127 01:28:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.387 01:28:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.387 01:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.387 "name": "raid_bdev1", 00:08:04.387 "uuid": "b1921ce5-64f1-43cc-9962-8abf0d1b9bb5", 00:08:04.387 "strip_size_kb": 0, 00:08:04.387 "state": "online", 00:08:04.387 "raid_level": "raid1", 00:08:04.387 "superblock": true, 00:08:04.387 "num_base_bdevs": 2, 00:08:04.387 "num_base_bdevs_discovered": 2, 00:08:04.387 "num_base_bdevs_operational": 2, 00:08:04.387 "base_bdevs_list": [ 00:08:04.387 { 00:08:04.387 "name": "BaseBdev1", 00:08:04.387 "uuid": "26980080-fded-5335-8d45-feb8dd44d47a", 00:08:04.387 "is_configured": true, 00:08:04.387 "data_offset": 2048, 00:08:04.387 "data_size": 63488 00:08:04.387 }, 00:08:04.387 { 00:08:04.387 "name": "BaseBdev2", 00:08:04.387 "uuid": 
"ad79ed53-d201-5c13-855e-67b9312520a3", 00:08:04.387 "is_configured": true, 00:08:04.387 "data_offset": 2048, 00:08:04.387 "data_size": 63488 00:08:04.387 } 00:08:04.387 ] 00:08:04.387 }' 00:08:04.387 01:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.387 01:28:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.647 01:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:04.647 01:28:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:04.907 [2024-10-09 01:28:03.547268] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:05.848 01:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:05.848 01:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.848 01:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.848 01:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.848 01:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:05.848 01:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:05.848 01:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:05.848 01:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:05.848 01:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:05.848 01:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:05.848 01:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:08:05.848 01:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:05.848 01:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:05.848 01:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:05.848 01:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.848 01:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.848 01:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.848 01:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.848 01:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.848 01:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.848 01:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:05.848 01:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.848 01:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.848 01:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.848 "name": "raid_bdev1", 00:08:05.848 "uuid": "b1921ce5-64f1-43cc-9962-8abf0d1b9bb5", 00:08:05.848 "strip_size_kb": 0, 00:08:05.848 "state": "online", 00:08:05.848 "raid_level": "raid1", 00:08:05.848 "superblock": true, 00:08:05.848 "num_base_bdevs": 2, 00:08:05.848 "num_base_bdevs_discovered": 2, 00:08:05.848 "num_base_bdevs_operational": 2, 00:08:05.848 "base_bdevs_list": [ 00:08:05.848 { 00:08:05.848 "name": "BaseBdev1", 00:08:05.848 "uuid": "26980080-fded-5335-8d45-feb8dd44d47a", 00:08:05.848 "is_configured": true, 00:08:05.848 "data_offset": 2048, 00:08:05.848 
"data_size": 63488 00:08:05.848 }, 00:08:05.848 { 00:08:05.848 "name": "BaseBdev2", 00:08:05.848 "uuid": "ad79ed53-d201-5c13-855e-67b9312520a3", 00:08:05.848 "is_configured": true, 00:08:05.848 "data_offset": 2048, 00:08:05.848 "data_size": 63488 00:08:05.848 } 00:08:05.848 ] 00:08:05.848 }' 00:08:05.849 01:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.849 01:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.109 01:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:06.109 01:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.109 01:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.109 [2024-10-09 01:28:04.939593] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:06.109 [2024-10-09 01:28:04.939643] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:06.109 [2024-10-09 01:28:04.942087] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:06.109 [2024-10-09 01:28:04.942145] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.109 [2024-10-09 01:28:04.942231] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:06.109 [2024-10-09 01:28:04.942243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:06.109 { 00:08:06.109 "results": [ 00:08:06.109 { 00:08:06.109 "job": "raid_bdev1", 00:08:06.109 "core_mask": "0x1", 00:08:06.109 "workload": "randrw", 00:08:06.109 "percentage": 50, 00:08:06.109 "status": "finished", 00:08:06.109 "queue_depth": 1, 00:08:06.109 "io_size": 131072, 00:08:06.109 "runtime": 1.390075, 00:08:06.109 "iops": 15773.969030447997, 00:08:06.109 "mibps": 1971.7461288059997, 
00:08:06.109 "io_failed": 0, 00:08:06.109 "io_timeout": 0, 00:08:06.109 "avg_latency_us": 60.959990173013374, 00:08:06.109 "min_latency_us": 21.75542578227142, 00:08:06.109 "max_latency_us": 1378.0667654493159 00:08:06.109 } 00:08:06.109 ], 00:08:06.109 "core_count": 1 00:08:06.109 } 00:08:06.109 01:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.109 01:28:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75877 00:08:06.109 01:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 75877 ']' 00:08:06.109 01:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 75877 00:08:06.109 01:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:06.109 01:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:06.109 01:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75877 00:08:06.109 01:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:06.109 01:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:06.109 01:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75877' 00:08:06.109 killing process with pid 75877 00:08:06.109 01:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 75877 00:08:06.109 [2024-10-09 01:28:04.991032] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:06.109 01:28:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 75877 00:08:06.370 [2024-10-09 01:28:05.021381] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:06.630 01:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0uu7gZHPXx 00:08:06.630 01:28:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:06.630 01:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:06.630 01:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:06.630 ************************************ 00:08:06.630 END TEST raid_read_error_test 00:08:06.630 ************************************ 00:08:06.630 01:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:06.630 01:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:06.630 01:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:06.630 01:28:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:06.630 00:08:06.630 real 0m3.461s 00:08:06.630 user 0m4.279s 00:08:06.630 sys 0m0.632s 00:08:06.630 01:28:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:06.630 01:28:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.630 01:28:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:06.630 01:28:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:06.630 01:28:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.630 01:28:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.630 ************************************ 00:08:06.630 START TEST raid_write_error_test 00:08:06.630 ************************************ 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:06.631 01:28:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:06.631 01:28:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oi8JRciSwc 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76006 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76006 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 76006 ']' 00:08:06.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:06.631 01:28:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.891 [2024-10-09 01:28:05.579144] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:08:06.891 [2024-10-09 01:28:05.579269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76006 ] 00:08:06.891 [2024-10-09 01:28:05.710314] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:06.891 [2024-10-09 01:28:05.738211] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.151 [2024-10-09 01:28:05.810674] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.151 [2024-10-09 01:28:05.886539] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.151 [2024-10-09 01:28:05.886579] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.721 01:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:07.721 01:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:07.721 01:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:07.721 01:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:07.721 01:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.721 01:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.721 BaseBdev1_malloc 00:08:07.721 01:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.721 01:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:07.721 01:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.721 01:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.721 true 00:08:07.721 01:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.721 01:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:07.721 01:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.721 01:28:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.721 [2024-10-09 01:28:06.458153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:07.721 [2024-10-09 01:28:06.458310] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.721 [2024-10-09 01:28:06.458334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:07.721 [2024-10-09 01:28:06.458348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.721 [2024-10-09 01:28:06.460764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.721 [2024-10-09 01:28:06.460801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:07.721 BaseBdev1 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.722 BaseBdev2_malloc 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.722 true 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.722 [2024-10-09 01:28:06.515824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:07.722 [2024-10-09 01:28:06.515874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.722 [2024-10-09 01:28:06.515890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:07.722 [2024-10-09 01:28:06.515900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.722 [2024-10-09 01:28:06.518168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.722 [2024-10-09 01:28:06.518280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:07.722 BaseBdev2 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.722 [2024-10-09 01:28:06.527858] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:07.722 [2024-10-09 01:28:06.530048] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.722 [2024-10-09 01:28:06.530255] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:07.722 [2024-10-09 
01:28:06.530308] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:07.722 [2024-10-09 01:28:06.530586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:07.722 [2024-10-09 01:28:06.530771] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:07.722 [2024-10-09 01:28:06.530816] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:07.722 [2024-10-09 01:28:06.530982] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.722 01:28:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.722 "name": "raid_bdev1", 00:08:07.722 "uuid": "085ec306-6822-4d37-847e-cd6fd7e4f64c", 00:08:07.722 "strip_size_kb": 0, 00:08:07.722 "state": "online", 00:08:07.722 "raid_level": "raid1", 00:08:07.722 "superblock": true, 00:08:07.722 "num_base_bdevs": 2, 00:08:07.722 "num_base_bdevs_discovered": 2, 00:08:07.722 "num_base_bdevs_operational": 2, 00:08:07.722 "base_bdevs_list": [ 00:08:07.722 { 00:08:07.722 "name": "BaseBdev1", 00:08:07.722 "uuid": "d382e9b9-a01f-55f0-83db-7b502d140e95", 00:08:07.722 "is_configured": true, 00:08:07.722 "data_offset": 2048, 00:08:07.722 "data_size": 63488 00:08:07.722 }, 00:08:07.722 { 00:08:07.722 "name": "BaseBdev2", 00:08:07.722 "uuid": "28c1c5a6-2629-5081-be76-995e56c321ca", 00:08:07.722 "is_configured": true, 00:08:07.722 "data_offset": 2048, 00:08:07.722 "data_size": 63488 00:08:07.722 } 00:08:07.722 ] 00:08:07.722 }' 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.722 01:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.292 01:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:08.292 01:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:08.292 [2024-10-09 01:28:07.064491] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:09.233 01:28:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:09.233 01:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.233 01:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.233 [2024-10-09 01:28:07.985516] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:09.233 [2024-10-09 01:28:07.985599] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:09.233 [2024-10-09 01:28:07.985850] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 00:08:09.233 01:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.233 01:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:09.233 01:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:09.233 01:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:09.233 01:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:09.233 01:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:09.233 01:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:09.233 01:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:09.233 01:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:09.233 01:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:09.233 01:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:09.233 
01:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.233 01:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.233 01:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.233 01:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.233 01:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.233 01:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:09.233 01:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.233 01:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.233 01:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.233 01:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.233 "name": "raid_bdev1", 00:08:09.233 "uuid": "085ec306-6822-4d37-847e-cd6fd7e4f64c", 00:08:09.233 "strip_size_kb": 0, 00:08:09.233 "state": "online", 00:08:09.233 "raid_level": "raid1", 00:08:09.233 "superblock": true, 00:08:09.233 "num_base_bdevs": 2, 00:08:09.233 "num_base_bdevs_discovered": 1, 00:08:09.234 "num_base_bdevs_operational": 1, 00:08:09.234 "base_bdevs_list": [ 00:08:09.234 { 00:08:09.234 "name": null, 00:08:09.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.234 "is_configured": false, 00:08:09.234 "data_offset": 0, 00:08:09.234 "data_size": 63488 00:08:09.234 }, 00:08:09.234 { 00:08:09.234 "name": "BaseBdev2", 00:08:09.234 "uuid": "28c1c5a6-2629-5081-be76-995e56c321ca", 00:08:09.234 "is_configured": true, 00:08:09.234 "data_offset": 2048, 00:08:09.234 "data_size": 63488 00:08:09.234 } 00:08:09.234 ] 00:08:09.234 }' 00:08:09.234 01:28:08 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.234 01:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.805 01:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:09.805 01:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.805 01:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.805 [2024-10-09 01:28:08.431677] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:09.805 [2024-10-09 01:28:08.431795] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:09.805 [2024-10-09 01:28:08.434229] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.805 [2024-10-09 01:28:08.434313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.805 [2024-10-09 01:28:08.434391] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:09.805 [2024-10-09 01:28:08.434436] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:09.805 { 00:08:09.805 "results": [ 00:08:09.805 { 00:08:09.805 "job": "raid_bdev1", 00:08:09.805 "core_mask": "0x1", 00:08:09.805 "workload": "randrw", 00:08:09.805 "percentage": 50, 00:08:09.805 "status": "finished", 00:08:09.805 "queue_depth": 1, 00:08:09.805 "io_size": 131072, 00:08:09.805 "runtime": 1.365072, 00:08:09.805 "iops": 19150.63820809452, 00:08:09.805 "mibps": 2393.829776011815, 00:08:09.805 "io_failed": 0, 00:08:09.805 "io_timeout": 0, 00:08:09.805 "avg_latency_us": 49.777475908934505, 00:08:09.805 "min_latency_us": 21.309160638019698, 00:08:09.805 "max_latency_us": 1328.085069293123 00:08:09.805 } 00:08:09.805 ], 00:08:09.805 "core_count": 1 00:08:09.805 } 00:08:09.805 01:28:08 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.805 01:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76006 00:08:09.805 01:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 76006 ']' 00:08:09.805 01:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 76006 00:08:09.805 01:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:09.805 01:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:09.805 01:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76006 00:08:09.805 killing process with pid 76006 00:08:09.805 01:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:09.805 01:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:09.805 01:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76006' 00:08:09.805 01:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 76006 00:08:09.805 [2024-10-09 01:28:08.481270] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:09.805 01:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 76006 00:08:09.805 [2024-10-09 01:28:08.509589] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:10.066 01:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:10.066 01:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oi8JRciSwc 00:08:10.066 01:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:10.066 01:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:10.066 01:28:08 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:10.066 01:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:10.066 01:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:10.066 ************************************ 00:08:10.066 END TEST raid_write_error_test 00:08:10.066 ************************************ 00:08:10.066 01:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:10.066 00:08:10.066 real 0m3.413s 00:08:10.066 user 0m4.214s 00:08:10.066 sys 0m0.592s 00:08:10.066 01:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.066 01:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.066 01:28:08 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:10.066 01:28:08 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:10.066 01:28:08 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:10.066 01:28:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:10.066 01:28:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.066 01:28:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:10.327 ************************************ 00:08:10.327 START TEST raid_state_function_test 00:08:10.327 ************************************ 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:10.327 01:28:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=76144 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76144' 00:08:10.327 Process raid pid: 76144 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 76144 00:08:10.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 76144 ']' 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.327 01:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.327 [2024-10-09 01:28:09.071428] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:08:10.327 [2024-10-09 01:28:09.071590] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.327 [2024-10-09 01:28:09.209353] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:10.588 [2024-10-09 01:28:09.236672] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.588 [2024-10-09 01:28:09.304995] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.588 [2024-10-09 01:28:09.381568] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.588 [2024-10-09 01:28:09.381610] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.158 01:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:11.158 01:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:11.158 01:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:11.158 01:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.158 01:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.158 [2024-10-09 01:28:09.875934] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:11.158 [2024-10-09 01:28:09.875993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:11.158 [2024-10-09 01:28:09.876009] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:11.158 [2024-10-09 01:28:09.876016] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:11.158 [2024-10-09 01:28:09.876028] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:11.158 [2024-10-09 01:28:09.876035] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:11.158 01:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.158 01:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:11.158 01:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.158 01:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.158 01:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.158 01:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.158 01:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.158 01:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.158 01:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.158 01:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.158 01:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.158 01:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.158 01:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.158 01:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.158 01:28:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.158 01:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.158 01:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.158 "name": "Existed_Raid", 00:08:11.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.158 "strip_size_kb": 64, 00:08:11.158 "state": "configuring", 00:08:11.158 "raid_level": "raid0", 00:08:11.158 "superblock": false, 00:08:11.158 "num_base_bdevs": 3, 00:08:11.158 "num_base_bdevs_discovered": 0, 00:08:11.158 "num_base_bdevs_operational": 3, 00:08:11.158 "base_bdevs_list": [ 00:08:11.158 { 00:08:11.158 "name": "BaseBdev1", 00:08:11.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.158 "is_configured": false, 00:08:11.158 "data_offset": 0, 00:08:11.158 "data_size": 0 00:08:11.158 }, 00:08:11.158 { 00:08:11.158 "name": "BaseBdev2", 00:08:11.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.158 "is_configured": false, 00:08:11.158 "data_offset": 0, 00:08:11.158 "data_size": 0 00:08:11.158 }, 00:08:11.158 { 00:08:11.158 "name": "BaseBdev3", 00:08:11.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.158 "is_configured": false, 00:08:11.158 "data_offset": 0, 00:08:11.158 "data_size": 0 00:08:11.158 } 00:08:11.158 ] 00:08:11.158 }' 00:08:11.158 01:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.158 01:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.419 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:11.419 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.419 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.419 [2024-10-09 01:28:10.271922] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:11.419 [2024-10-09 01:28:10.272014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:11.419 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.419 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:11.419 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.419 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.419 [2024-10-09 01:28:10.283931] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:11.419 [2024-10-09 01:28:10.283969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:11.419 [2024-10-09 01:28:10.283980] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:11.419 [2024-10-09 01:28:10.283987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:11.419 [2024-10-09 01:28:10.283995] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:11.419 [2024-10-09 01:28:10.284002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:11.419 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.419 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:11.419 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.419 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.679 
[2024-10-09 01:28:10.311244] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:11.679 BaseBdev1 00:08:11.679 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.679 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:11.679 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:11.679 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:11.679 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:11.679 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:11.679 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:11.679 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:11.679 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.679 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.679 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.679 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:11.679 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.679 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.679 [ 00:08:11.679 { 00:08:11.679 "name": "BaseBdev1", 00:08:11.679 "aliases": [ 00:08:11.679 "8b301b3c-0456-4c59-8d21-6259be49744a" 00:08:11.679 ], 00:08:11.679 "product_name": "Malloc disk", 00:08:11.679 "block_size": 512, 00:08:11.679 "num_blocks": 65536, 00:08:11.679 "uuid": 
"8b301b3c-0456-4c59-8d21-6259be49744a", 00:08:11.679 "assigned_rate_limits": { 00:08:11.679 "rw_ios_per_sec": 0, 00:08:11.679 "rw_mbytes_per_sec": 0, 00:08:11.679 "r_mbytes_per_sec": 0, 00:08:11.679 "w_mbytes_per_sec": 0 00:08:11.679 }, 00:08:11.679 "claimed": true, 00:08:11.679 "claim_type": "exclusive_write", 00:08:11.679 "zoned": false, 00:08:11.679 "supported_io_types": { 00:08:11.679 "read": true, 00:08:11.679 "write": true, 00:08:11.679 "unmap": true, 00:08:11.679 "flush": true, 00:08:11.679 "reset": true, 00:08:11.679 "nvme_admin": false, 00:08:11.679 "nvme_io": false, 00:08:11.680 "nvme_io_md": false, 00:08:11.680 "write_zeroes": true, 00:08:11.680 "zcopy": true, 00:08:11.680 "get_zone_info": false, 00:08:11.680 "zone_management": false, 00:08:11.680 "zone_append": false, 00:08:11.680 "compare": false, 00:08:11.680 "compare_and_write": false, 00:08:11.680 "abort": true, 00:08:11.680 "seek_hole": false, 00:08:11.680 "seek_data": false, 00:08:11.680 "copy": true, 00:08:11.680 "nvme_iov_md": false 00:08:11.680 }, 00:08:11.680 "memory_domains": [ 00:08:11.680 { 00:08:11.680 "dma_device_id": "system", 00:08:11.680 "dma_device_type": 1 00:08:11.680 }, 00:08:11.680 { 00:08:11.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.680 "dma_device_type": 2 00:08:11.680 } 00:08:11.680 ], 00:08:11.680 "driver_specific": {} 00:08:11.680 } 00:08:11.680 ] 00:08:11.680 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.680 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:11.680 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:11.680 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.680 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.680 01:28:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.680 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.680 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.680 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.680 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.680 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.680 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.680 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.680 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.680 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.680 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.680 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.680 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.680 "name": "Existed_Raid", 00:08:11.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.680 "strip_size_kb": 64, 00:08:11.680 "state": "configuring", 00:08:11.680 "raid_level": "raid0", 00:08:11.680 "superblock": false, 00:08:11.680 "num_base_bdevs": 3, 00:08:11.680 "num_base_bdevs_discovered": 1, 00:08:11.680 "num_base_bdevs_operational": 3, 00:08:11.680 "base_bdevs_list": [ 00:08:11.680 { 00:08:11.680 "name": "BaseBdev1", 00:08:11.680 "uuid": "8b301b3c-0456-4c59-8d21-6259be49744a", 00:08:11.680 "is_configured": true, 00:08:11.680 "data_offset": 0, 
00:08:11.680 "data_size": 65536 00:08:11.680 }, 00:08:11.680 { 00:08:11.680 "name": "BaseBdev2", 00:08:11.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.680 "is_configured": false, 00:08:11.680 "data_offset": 0, 00:08:11.680 "data_size": 0 00:08:11.680 }, 00:08:11.680 { 00:08:11.680 "name": "BaseBdev3", 00:08:11.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.680 "is_configured": false, 00:08:11.680 "data_offset": 0, 00:08:11.680 "data_size": 0 00:08:11.680 } 00:08:11.680 ] 00:08:11.680 }' 00:08:11.680 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.680 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.940 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:11.940 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.940 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.940 [2024-10-09 01:28:10.811479] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:11.940 [2024-10-09 01:28:10.811584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:11.940 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.940 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:11.940 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.940 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.940 [2024-10-09 01:28:10.823463] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:11.940 [2024-10-09 
01:28:10.825751] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:11.940 [2024-10-09 01:28:10.825790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:11.940 [2024-10-09 01:28:10.825802] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:11.940 [2024-10-09 01:28:10.825810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:11.940 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.940 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:11.940 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:11.941 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:11.941 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.941 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.941 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.941 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.941 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.941 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.941 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.201 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.201 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.201 01:28:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.201 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.201 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.201 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.201 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.201 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.201 "name": "Existed_Raid", 00:08:12.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.201 "strip_size_kb": 64, 00:08:12.201 "state": "configuring", 00:08:12.201 "raid_level": "raid0", 00:08:12.201 "superblock": false, 00:08:12.201 "num_base_bdevs": 3, 00:08:12.201 "num_base_bdevs_discovered": 1, 00:08:12.201 "num_base_bdevs_operational": 3, 00:08:12.201 "base_bdevs_list": [ 00:08:12.201 { 00:08:12.201 "name": "BaseBdev1", 00:08:12.201 "uuid": "8b301b3c-0456-4c59-8d21-6259be49744a", 00:08:12.201 "is_configured": true, 00:08:12.201 "data_offset": 0, 00:08:12.201 "data_size": 65536 00:08:12.201 }, 00:08:12.201 { 00:08:12.201 "name": "BaseBdev2", 00:08:12.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.201 "is_configured": false, 00:08:12.201 "data_offset": 0, 00:08:12.201 "data_size": 0 00:08:12.201 }, 00:08:12.201 { 00:08:12.201 "name": "BaseBdev3", 00:08:12.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.201 "is_configured": false, 00:08:12.201 "data_offset": 0, 00:08:12.201 "data_size": 0 00:08:12.201 } 00:08:12.201 ] 00:08:12.201 }' 00:08:12.201 01:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.201 01:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.462 01:28:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.462 [2024-10-09 01:28:11.267166] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:12.462 BaseBdev2 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:12.462 [ 00:08:12.462 { 00:08:12.462 "name": "BaseBdev2", 00:08:12.462 "aliases": [ 00:08:12.462 "e5fe3898-cd4a-4d7b-8403-ae95d0dabd1d" 00:08:12.462 ], 00:08:12.462 "product_name": "Malloc disk", 00:08:12.462 "block_size": 512, 00:08:12.462 "num_blocks": 65536, 00:08:12.462 "uuid": "e5fe3898-cd4a-4d7b-8403-ae95d0dabd1d", 00:08:12.462 "assigned_rate_limits": { 00:08:12.462 "rw_ios_per_sec": 0, 00:08:12.462 "rw_mbytes_per_sec": 0, 00:08:12.462 "r_mbytes_per_sec": 0, 00:08:12.462 "w_mbytes_per_sec": 0 00:08:12.462 }, 00:08:12.462 "claimed": true, 00:08:12.462 "claim_type": "exclusive_write", 00:08:12.462 "zoned": false, 00:08:12.462 "supported_io_types": { 00:08:12.462 "read": true, 00:08:12.462 "write": true, 00:08:12.462 "unmap": true, 00:08:12.462 "flush": true, 00:08:12.462 "reset": true, 00:08:12.462 "nvme_admin": false, 00:08:12.462 "nvme_io": false, 00:08:12.462 "nvme_io_md": false, 00:08:12.462 "write_zeroes": true, 00:08:12.462 "zcopy": true, 00:08:12.462 "get_zone_info": false, 00:08:12.462 "zone_management": false, 00:08:12.462 "zone_append": false, 00:08:12.462 "compare": false, 00:08:12.462 "compare_and_write": false, 00:08:12.462 "abort": true, 00:08:12.462 "seek_hole": false, 00:08:12.462 "seek_data": false, 00:08:12.462 "copy": true, 00:08:12.462 "nvme_iov_md": false 00:08:12.462 }, 00:08:12.462 "memory_domains": [ 00:08:12.462 { 00:08:12.462 "dma_device_id": "system", 00:08:12.462 "dma_device_type": 1 00:08:12.462 }, 00:08:12.462 { 00:08:12.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.462 "dma_device_type": 2 00:08:12.462 } 00:08:12.462 ], 00:08:12.462 "driver_specific": {} 00:08:12.462 } 00:08:12.462 ] 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- 
# (( i++ )) 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.462 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.722 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.723 "name": "Existed_Raid", 
00:08:12.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.723 "strip_size_kb": 64, 00:08:12.723 "state": "configuring", 00:08:12.723 "raid_level": "raid0", 00:08:12.723 "superblock": false, 00:08:12.723 "num_base_bdevs": 3, 00:08:12.723 "num_base_bdevs_discovered": 2, 00:08:12.723 "num_base_bdevs_operational": 3, 00:08:12.723 "base_bdevs_list": [ 00:08:12.723 { 00:08:12.723 "name": "BaseBdev1", 00:08:12.723 "uuid": "8b301b3c-0456-4c59-8d21-6259be49744a", 00:08:12.723 "is_configured": true, 00:08:12.723 "data_offset": 0, 00:08:12.723 "data_size": 65536 00:08:12.723 }, 00:08:12.723 { 00:08:12.723 "name": "BaseBdev2", 00:08:12.723 "uuid": "e5fe3898-cd4a-4d7b-8403-ae95d0dabd1d", 00:08:12.723 "is_configured": true, 00:08:12.723 "data_offset": 0, 00:08:12.723 "data_size": 65536 00:08:12.723 }, 00:08:12.723 { 00:08:12.723 "name": "BaseBdev3", 00:08:12.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.723 "is_configured": false, 00:08:12.723 "data_offset": 0, 00:08:12.723 "data_size": 0 00:08:12.723 } 00:08:12.723 ] 00:08:12.723 }' 00:08:12.723 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.723 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.983 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:12.983 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.983 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.983 [2024-10-09 01:28:11.752176] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:12.983 [2024-10-09 01:28:11.752229] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:12.983 [2024-10-09 01:28:11.752238] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 
00:08:12.983 [2024-10-09 01:28:11.752597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:12.983 [2024-10-09 01:28:11.752777] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:12.983 [2024-10-09 01:28:11.752791] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:12.983 [2024-10-09 01:28:11.753006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.983 BaseBdev3 00:08:12.983 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.983 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:12.983 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:12.983 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:12.983 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:12.983 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:12.983 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:12.983 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:12.983 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.983 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.983 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.983 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:12.983 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:08:12.983 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.983 [ 00:08:12.983 { 00:08:12.983 "name": "BaseBdev3", 00:08:12.983 "aliases": [ 00:08:12.983 "e4722746-f9ca-40f5-84b1-5be15357f375" 00:08:12.983 ], 00:08:12.983 "product_name": "Malloc disk", 00:08:12.983 "block_size": 512, 00:08:12.983 "num_blocks": 65536, 00:08:12.983 "uuid": "e4722746-f9ca-40f5-84b1-5be15357f375", 00:08:12.983 "assigned_rate_limits": { 00:08:12.983 "rw_ios_per_sec": 0, 00:08:12.983 "rw_mbytes_per_sec": 0, 00:08:12.983 "r_mbytes_per_sec": 0, 00:08:12.983 "w_mbytes_per_sec": 0 00:08:12.983 }, 00:08:12.983 "claimed": true, 00:08:12.983 "claim_type": "exclusive_write", 00:08:12.983 "zoned": false, 00:08:12.983 "supported_io_types": { 00:08:12.983 "read": true, 00:08:12.983 "write": true, 00:08:12.983 "unmap": true, 00:08:12.983 "flush": true, 00:08:12.983 "reset": true, 00:08:12.983 "nvme_admin": false, 00:08:12.983 "nvme_io": false, 00:08:12.983 "nvme_io_md": false, 00:08:12.983 "write_zeroes": true, 00:08:12.983 "zcopy": true, 00:08:12.983 "get_zone_info": false, 00:08:12.983 "zone_management": false, 00:08:12.983 "zone_append": false, 00:08:12.983 "compare": false, 00:08:12.983 "compare_and_write": false, 00:08:12.983 "abort": true, 00:08:12.983 "seek_hole": false, 00:08:12.983 "seek_data": false, 00:08:12.983 "copy": true, 00:08:12.983 "nvme_iov_md": false 00:08:12.983 }, 00:08:12.983 "memory_domains": [ 00:08:12.983 { 00:08:12.983 "dma_device_id": "system", 00:08:12.983 "dma_device_type": 1 00:08:12.983 }, 00:08:12.983 { 00:08:12.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.983 "dma_device_type": 2 00:08:12.983 } 00:08:12.984 ], 00:08:12.984 "driver_specific": {} 00:08:12.984 } 00:08:12.984 ] 00:08:12.984 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.984 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
00:08:12.984 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:12.984 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:12.984 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:12.984 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.984 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.984 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.984 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.984 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.984 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.984 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.984 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.984 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.984 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.984 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.984 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.984 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.984 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.984 01:28:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.984 "name": "Existed_Raid", 00:08:12.984 "uuid": "1dba2d3d-755b-41a5-a96e-8ec6724e7de2", 00:08:12.984 "strip_size_kb": 64, 00:08:12.984 "state": "online", 00:08:12.984 "raid_level": "raid0", 00:08:12.984 "superblock": false, 00:08:12.984 "num_base_bdevs": 3, 00:08:12.984 "num_base_bdevs_discovered": 3, 00:08:12.984 "num_base_bdevs_operational": 3, 00:08:12.984 "base_bdevs_list": [ 00:08:12.984 { 00:08:12.984 "name": "BaseBdev1", 00:08:12.984 "uuid": "8b301b3c-0456-4c59-8d21-6259be49744a", 00:08:12.984 "is_configured": true, 00:08:12.984 "data_offset": 0, 00:08:12.984 "data_size": 65536 00:08:12.984 }, 00:08:12.984 { 00:08:12.984 "name": "BaseBdev2", 00:08:12.984 "uuid": "e5fe3898-cd4a-4d7b-8403-ae95d0dabd1d", 00:08:12.984 "is_configured": true, 00:08:12.984 "data_offset": 0, 00:08:12.984 "data_size": 65536 00:08:12.984 }, 00:08:12.984 { 00:08:12.984 "name": "BaseBdev3", 00:08:12.984 "uuid": "e4722746-f9ca-40f5-84b1-5be15357f375", 00:08:12.984 "is_configured": true, 00:08:12.984 "data_offset": 0, 00:08:12.984 "data_size": 65536 00:08:12.984 } 00:08:12.984 ] 00:08:12.984 }' 00:08:12.984 01:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.984 01:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.555 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:13.555 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:13.555 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:13.555 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:13.555 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:13.555 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # 
local cmp_raid_bdev cmp_base_bdev 00:08:13.555 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:13.555 01:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.555 01:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.555 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:13.555 [2024-10-09 01:28:12.228671] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.555 01:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.555 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:13.555 "name": "Existed_Raid", 00:08:13.555 "aliases": [ 00:08:13.555 "1dba2d3d-755b-41a5-a96e-8ec6724e7de2" 00:08:13.555 ], 00:08:13.555 "product_name": "Raid Volume", 00:08:13.555 "block_size": 512, 00:08:13.555 "num_blocks": 196608, 00:08:13.555 "uuid": "1dba2d3d-755b-41a5-a96e-8ec6724e7de2", 00:08:13.555 "assigned_rate_limits": { 00:08:13.555 "rw_ios_per_sec": 0, 00:08:13.555 "rw_mbytes_per_sec": 0, 00:08:13.555 "r_mbytes_per_sec": 0, 00:08:13.555 "w_mbytes_per_sec": 0 00:08:13.555 }, 00:08:13.555 "claimed": false, 00:08:13.555 "zoned": false, 00:08:13.555 "supported_io_types": { 00:08:13.555 "read": true, 00:08:13.555 "write": true, 00:08:13.555 "unmap": true, 00:08:13.555 "flush": true, 00:08:13.555 "reset": true, 00:08:13.555 "nvme_admin": false, 00:08:13.555 "nvme_io": false, 00:08:13.555 "nvme_io_md": false, 00:08:13.555 "write_zeroes": true, 00:08:13.555 "zcopy": false, 00:08:13.555 "get_zone_info": false, 00:08:13.555 "zone_management": false, 00:08:13.555 "zone_append": false, 00:08:13.555 "compare": false, 00:08:13.555 "compare_and_write": false, 00:08:13.555 "abort": false, 00:08:13.555 "seek_hole": false, 00:08:13.555 "seek_data": false, 00:08:13.555 "copy": 
false, 00:08:13.555 "nvme_iov_md": false 00:08:13.555 }, 00:08:13.555 "memory_domains": [ 00:08:13.555 { 00:08:13.555 "dma_device_id": "system", 00:08:13.555 "dma_device_type": 1 00:08:13.555 }, 00:08:13.555 { 00:08:13.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.555 "dma_device_type": 2 00:08:13.555 }, 00:08:13.555 { 00:08:13.555 "dma_device_id": "system", 00:08:13.555 "dma_device_type": 1 00:08:13.555 }, 00:08:13.555 { 00:08:13.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.555 "dma_device_type": 2 00:08:13.555 }, 00:08:13.555 { 00:08:13.555 "dma_device_id": "system", 00:08:13.555 "dma_device_type": 1 00:08:13.555 }, 00:08:13.555 { 00:08:13.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.555 "dma_device_type": 2 00:08:13.555 } 00:08:13.555 ], 00:08:13.555 "driver_specific": { 00:08:13.555 "raid": { 00:08:13.555 "uuid": "1dba2d3d-755b-41a5-a96e-8ec6724e7de2", 00:08:13.555 "strip_size_kb": 64, 00:08:13.555 "state": "online", 00:08:13.555 "raid_level": "raid0", 00:08:13.555 "superblock": false, 00:08:13.555 "num_base_bdevs": 3, 00:08:13.555 "num_base_bdevs_discovered": 3, 00:08:13.555 "num_base_bdevs_operational": 3, 00:08:13.555 "base_bdevs_list": [ 00:08:13.555 { 00:08:13.555 "name": "BaseBdev1", 00:08:13.555 "uuid": "8b301b3c-0456-4c59-8d21-6259be49744a", 00:08:13.555 "is_configured": true, 00:08:13.555 "data_offset": 0, 00:08:13.555 "data_size": 65536 00:08:13.555 }, 00:08:13.555 { 00:08:13.555 "name": "BaseBdev2", 00:08:13.555 "uuid": "e5fe3898-cd4a-4d7b-8403-ae95d0dabd1d", 00:08:13.555 "is_configured": true, 00:08:13.555 "data_offset": 0, 00:08:13.555 "data_size": 65536 00:08:13.555 }, 00:08:13.555 { 00:08:13.555 "name": "BaseBdev3", 00:08:13.555 "uuid": "e4722746-f9ca-40f5-84b1-5be15357f375", 00:08:13.555 "is_configured": true, 00:08:13.555 "data_offset": 0, 00:08:13.555 "data_size": 65536 00:08:13.555 } 00:08:13.555 ] 00:08:13.555 } 00:08:13.555 } 00:08:13.555 }' 00:08:13.555 01:28:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:13.555 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:13.555 BaseBdev2 00:08:13.555 BaseBdev3' 00:08:13.555 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.555 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:13.556 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.556 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.556 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:13.556 01:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.556 01:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.556 01:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.556 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.556 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.556 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.556 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:13.556 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.556 01:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:13.556 01:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.556 01:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.556 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.556 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.556 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.556 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.556 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:13.556 01:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.556 01:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.816 01:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.816 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.816 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.816 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:13.816 01:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.816 01:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.816 [2024-10-09 01:28:12.476554] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:13.816 [2024-10-09 01:28:12.476684] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.816 [2024-10-09 01:28:12.476793] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.816 01:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.816 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:13.816 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:13.816 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:13.816 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:13.816 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:13.816 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:13.816 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.816 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:13.816 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.816 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.816 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.816 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.817 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.817 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.817 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.817 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.817 01:28:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.817 01:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.817 01:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.817 01:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.817 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.817 "name": "Existed_Raid", 00:08:13.817 "uuid": "1dba2d3d-755b-41a5-a96e-8ec6724e7de2", 00:08:13.817 "strip_size_kb": 64, 00:08:13.817 "state": "offline", 00:08:13.817 "raid_level": "raid0", 00:08:13.817 "superblock": false, 00:08:13.817 "num_base_bdevs": 3, 00:08:13.817 "num_base_bdevs_discovered": 2, 00:08:13.817 "num_base_bdevs_operational": 2, 00:08:13.817 "base_bdevs_list": [ 00:08:13.817 { 00:08:13.817 "name": null, 00:08:13.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.817 "is_configured": false, 00:08:13.817 "data_offset": 0, 00:08:13.817 "data_size": 65536 00:08:13.817 }, 00:08:13.817 { 00:08:13.817 "name": "BaseBdev2", 00:08:13.817 "uuid": "e5fe3898-cd4a-4d7b-8403-ae95d0dabd1d", 00:08:13.817 "is_configured": true, 00:08:13.817 "data_offset": 0, 00:08:13.817 "data_size": 65536 00:08:13.817 }, 00:08:13.817 { 00:08:13.817 "name": "BaseBdev3", 00:08:13.817 "uuid": "e4722746-f9ca-40f5-84b1-5be15357f375", 00:08:13.817 "is_configured": true, 00:08:13.817 "data_offset": 0, 00:08:13.817 "data_size": 65536 00:08:13.817 } 00:08:13.817 ] 00:08:13.817 }' 00:08:13.817 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.817 01:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.085 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:14.085 01:28:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:14.085 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.085 01:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.085 01:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.085 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:14.085 01:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.085 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:14.085 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:14.085 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:14.085 01:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.085 01:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.085 [2024-10-09 01:28:12.965510] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:14.357 01:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.357 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:14.357 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:14.357 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.357 01:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:14.357 01:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.357 01:28:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.357 [2024-10-09 01:28:13.042889] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:14.357 [2024-10-09 01:28:13.042954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:14.357 01:28:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.357 BaseBdev2 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.357 
01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.357 [ 00:08:14.357 { 00:08:14.357 "name": "BaseBdev2", 00:08:14.357 "aliases": [ 00:08:14.357 "8b51dcd4-6231-4413-9700-7206e5bfb8aa" 00:08:14.357 ], 00:08:14.357 "product_name": "Malloc disk", 00:08:14.357 "block_size": 512, 00:08:14.357 "num_blocks": 65536, 00:08:14.357 "uuid": "8b51dcd4-6231-4413-9700-7206e5bfb8aa", 00:08:14.357 "assigned_rate_limits": { 00:08:14.357 "rw_ios_per_sec": 0, 00:08:14.357 "rw_mbytes_per_sec": 0, 00:08:14.357 "r_mbytes_per_sec": 0, 00:08:14.357 "w_mbytes_per_sec": 0 00:08:14.357 }, 00:08:14.357 "claimed": false, 00:08:14.357 "zoned": false, 00:08:14.357 "supported_io_types": { 00:08:14.357 "read": true, 00:08:14.357 "write": true, 00:08:14.357 "unmap": true, 00:08:14.357 "flush": true, 00:08:14.357 "reset": true, 00:08:14.357 "nvme_admin": false, 00:08:14.357 "nvme_io": false, 00:08:14.357 "nvme_io_md": false, 00:08:14.357 "write_zeroes": true, 00:08:14.357 "zcopy": true, 00:08:14.357 "get_zone_info": false, 00:08:14.357 "zone_management": false, 00:08:14.357 "zone_append": false, 00:08:14.357 "compare": false, 00:08:14.357 "compare_and_write": false, 00:08:14.357 "abort": true, 00:08:14.357 "seek_hole": false, 00:08:14.357 "seek_data": false, 00:08:14.357 "copy": true, 00:08:14.357 "nvme_iov_md": false 00:08:14.357 }, 00:08:14.357 "memory_domains": [ 00:08:14.357 { 00:08:14.357 "dma_device_id": "system", 00:08:14.357 "dma_device_type": 1 00:08:14.357 }, 00:08:14.357 { 00:08:14.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.357 "dma_device_type": 2 00:08:14.357 } 00:08:14.357 ], 00:08:14.357 "driver_specific": {} 00:08:14.357 } 00:08:14.357 ] 00:08:14.357 01:28:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.357 BaseBdev3 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.357 
01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.357 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.357 [ 00:08:14.357 { 00:08:14.357 "name": "BaseBdev3", 00:08:14.357 "aliases": [ 00:08:14.357 "5b93a3ac-6e6b-4821-a692-c8e4e0813f82" 00:08:14.357 ], 00:08:14.357 "product_name": "Malloc disk", 00:08:14.357 "block_size": 512, 00:08:14.357 "num_blocks": 65536, 00:08:14.357 "uuid": "5b93a3ac-6e6b-4821-a692-c8e4e0813f82", 00:08:14.357 "assigned_rate_limits": { 00:08:14.357 "rw_ios_per_sec": 0, 00:08:14.357 "rw_mbytes_per_sec": 0, 00:08:14.357 "r_mbytes_per_sec": 0, 00:08:14.357 "w_mbytes_per_sec": 0 00:08:14.357 }, 00:08:14.357 "claimed": false, 00:08:14.357 "zoned": false, 00:08:14.357 "supported_io_types": { 00:08:14.357 "read": true, 00:08:14.357 "write": true, 00:08:14.357 "unmap": true, 00:08:14.357 "flush": true, 00:08:14.357 "reset": true, 00:08:14.357 "nvme_admin": false, 00:08:14.357 "nvme_io": false, 00:08:14.357 "nvme_io_md": false, 00:08:14.357 "write_zeroes": true, 00:08:14.357 "zcopy": true, 00:08:14.357 "get_zone_info": false, 00:08:14.357 "zone_management": false, 00:08:14.357 "zone_append": false, 00:08:14.357 "compare": false, 00:08:14.357 "compare_and_write": false, 00:08:14.357 "abort": true, 00:08:14.358 "seek_hole": false, 00:08:14.358 "seek_data": false, 00:08:14.358 "copy": true, 00:08:14.358 "nvme_iov_md": false 00:08:14.358 }, 00:08:14.358 "memory_domains": [ 00:08:14.358 { 00:08:14.358 "dma_device_id": "system", 00:08:14.358 "dma_device_type": 1 00:08:14.358 }, 00:08:14.358 { 00:08:14.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.358 "dma_device_type": 2 00:08:14.358 } 00:08:14.358 ], 00:08:14.358 "driver_specific": {} 00:08:14.358 } 00:08:14.358 ] 00:08:14.358 01:28:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.358 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:14.358 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:14.358 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:14.358 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:14.358 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.358 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.358 [2024-10-09 01:28:13.237128] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:14.358 [2024-10-09 01:28:13.237248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:14.358 [2024-10-09 01:28:13.237287] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:14.358 [2024-10-09 01:28:13.239401] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:14.358 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.358 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.358 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.358 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.358 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.358 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:08:14.358 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.358 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.358 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.358 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.358 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.617 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.617 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.617 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.617 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.617 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.617 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.617 "name": "Existed_Raid", 00:08:14.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.617 "strip_size_kb": 64, 00:08:14.617 "state": "configuring", 00:08:14.617 "raid_level": "raid0", 00:08:14.617 "superblock": false, 00:08:14.617 "num_base_bdevs": 3, 00:08:14.617 "num_base_bdevs_discovered": 2, 00:08:14.617 "num_base_bdevs_operational": 3, 00:08:14.617 "base_bdevs_list": [ 00:08:14.617 { 00:08:14.617 "name": "BaseBdev1", 00:08:14.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.617 "is_configured": false, 00:08:14.617 "data_offset": 0, 00:08:14.617 "data_size": 0 00:08:14.617 }, 00:08:14.617 { 00:08:14.617 "name": "BaseBdev2", 00:08:14.617 "uuid": "8b51dcd4-6231-4413-9700-7206e5bfb8aa", 00:08:14.617 
"is_configured": true, 00:08:14.617 "data_offset": 0, 00:08:14.617 "data_size": 65536 00:08:14.617 }, 00:08:14.617 { 00:08:14.617 "name": "BaseBdev3", 00:08:14.617 "uuid": "5b93a3ac-6e6b-4821-a692-c8e4e0813f82", 00:08:14.617 "is_configured": true, 00:08:14.617 "data_offset": 0, 00:08:14.617 "data_size": 65536 00:08:14.617 } 00:08:14.617 ] 00:08:14.617 }' 00:08:14.617 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.617 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.877 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:14.877 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.877 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.877 [2024-10-09 01:28:13.657203] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:14.877 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.877 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.877 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.877 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.877 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.877 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.877 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.877 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.877 01:28:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.877 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.877 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.877 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.877 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.877 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.877 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.877 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.877 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.877 "name": "Existed_Raid", 00:08:14.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.877 "strip_size_kb": 64, 00:08:14.877 "state": "configuring", 00:08:14.877 "raid_level": "raid0", 00:08:14.877 "superblock": false, 00:08:14.877 "num_base_bdevs": 3, 00:08:14.877 "num_base_bdevs_discovered": 1, 00:08:14.877 "num_base_bdevs_operational": 3, 00:08:14.877 "base_bdevs_list": [ 00:08:14.877 { 00:08:14.877 "name": "BaseBdev1", 00:08:14.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.877 "is_configured": false, 00:08:14.877 "data_offset": 0, 00:08:14.877 "data_size": 0 00:08:14.877 }, 00:08:14.877 { 00:08:14.877 "name": null, 00:08:14.877 "uuid": "8b51dcd4-6231-4413-9700-7206e5bfb8aa", 00:08:14.877 "is_configured": false, 00:08:14.877 "data_offset": 0, 00:08:14.877 "data_size": 65536 00:08:14.877 }, 00:08:14.877 { 00:08:14.877 "name": "BaseBdev3", 00:08:14.877 "uuid": "5b93a3ac-6e6b-4821-a692-c8e4e0813f82", 00:08:14.877 "is_configured": true, 00:08:14.877 "data_offset": 0, 
00:08:14.877 "data_size": 65536 00:08:14.877 } 00:08:14.877 ] 00:08:14.877 }' 00:08:14.877 01:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.877 01:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.448 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:15.448 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.448 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.448 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.448 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.448 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:15.448 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:15.448 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.448 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.448 [2024-10-09 01:28:14.134403] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:15.448 BaseBdev1 00:08:15.448 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.448 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:15.448 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:15.448 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:15.448 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 
-- # local i 00:08:15.448 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:15.448 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:15.448 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:15.448 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.448 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.448 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.448 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:15.448 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.448 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.448 [ 00:08:15.448 { 00:08:15.448 "name": "BaseBdev1", 00:08:15.448 "aliases": [ 00:08:15.448 "b9f72567-c599-45bc-bbff-89c4a782fc8f" 00:08:15.448 ], 00:08:15.448 "product_name": "Malloc disk", 00:08:15.448 "block_size": 512, 00:08:15.448 "num_blocks": 65536, 00:08:15.448 "uuid": "b9f72567-c599-45bc-bbff-89c4a782fc8f", 00:08:15.448 "assigned_rate_limits": { 00:08:15.448 "rw_ios_per_sec": 0, 00:08:15.448 "rw_mbytes_per_sec": 0, 00:08:15.448 "r_mbytes_per_sec": 0, 00:08:15.448 "w_mbytes_per_sec": 0 00:08:15.448 }, 00:08:15.448 "claimed": true, 00:08:15.448 "claim_type": "exclusive_write", 00:08:15.448 "zoned": false, 00:08:15.448 "supported_io_types": { 00:08:15.448 "read": true, 00:08:15.448 "write": true, 00:08:15.448 "unmap": true, 00:08:15.448 "flush": true, 00:08:15.448 "reset": true, 00:08:15.448 "nvme_admin": false, 00:08:15.448 "nvme_io": false, 00:08:15.448 "nvme_io_md": false, 00:08:15.448 "write_zeroes": true, 00:08:15.448 "zcopy": true, 
00:08:15.448 "get_zone_info": false, 00:08:15.448 "zone_management": false, 00:08:15.448 "zone_append": false, 00:08:15.448 "compare": false, 00:08:15.448 "compare_and_write": false, 00:08:15.448 "abort": true, 00:08:15.448 "seek_hole": false, 00:08:15.448 "seek_data": false, 00:08:15.448 "copy": true, 00:08:15.448 "nvme_iov_md": false 00:08:15.448 }, 00:08:15.448 "memory_domains": [ 00:08:15.448 { 00:08:15.448 "dma_device_id": "system", 00:08:15.448 "dma_device_type": 1 00:08:15.448 }, 00:08:15.448 { 00:08:15.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.448 "dma_device_type": 2 00:08:15.448 } 00:08:15.448 ], 00:08:15.448 "driver_specific": {} 00:08:15.448 } 00:08:15.448 ] 00:08:15.449 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.449 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:15.449 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.449 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.449 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.449 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.449 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.449 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.449 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.449 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.449 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.449 01:28:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.449 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.449 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.449 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.449 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.449 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.449 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.449 "name": "Existed_Raid", 00:08:15.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.449 "strip_size_kb": 64, 00:08:15.449 "state": "configuring", 00:08:15.449 "raid_level": "raid0", 00:08:15.449 "superblock": false, 00:08:15.449 "num_base_bdevs": 3, 00:08:15.449 "num_base_bdevs_discovered": 2, 00:08:15.449 "num_base_bdevs_operational": 3, 00:08:15.449 "base_bdevs_list": [ 00:08:15.449 { 00:08:15.449 "name": "BaseBdev1", 00:08:15.449 "uuid": "b9f72567-c599-45bc-bbff-89c4a782fc8f", 00:08:15.449 "is_configured": true, 00:08:15.449 "data_offset": 0, 00:08:15.449 "data_size": 65536 00:08:15.449 }, 00:08:15.449 { 00:08:15.449 "name": null, 00:08:15.449 "uuid": "8b51dcd4-6231-4413-9700-7206e5bfb8aa", 00:08:15.449 "is_configured": false, 00:08:15.449 "data_offset": 0, 00:08:15.449 "data_size": 65536 00:08:15.449 }, 00:08:15.449 { 00:08:15.449 "name": "BaseBdev3", 00:08:15.449 "uuid": "5b93a3ac-6e6b-4821-a692-c8e4e0813f82", 00:08:15.449 "is_configured": true, 00:08:15.449 "data_offset": 0, 00:08:15.449 "data_size": 65536 00:08:15.449 } 00:08:15.449 ] 00:08:15.449 }' 00:08:15.449 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.449 01:28:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.020 [2024-10-09 01:28:14.674685] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.020 "name": "Existed_Raid", 00:08:16.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.020 "strip_size_kb": 64, 00:08:16.020 "state": "configuring", 00:08:16.020 "raid_level": "raid0", 00:08:16.020 "superblock": false, 00:08:16.020 "num_base_bdevs": 3, 00:08:16.020 "num_base_bdevs_discovered": 1, 00:08:16.020 "num_base_bdevs_operational": 3, 00:08:16.020 "base_bdevs_list": [ 00:08:16.020 { 00:08:16.020 "name": "BaseBdev1", 00:08:16.020 "uuid": "b9f72567-c599-45bc-bbff-89c4a782fc8f", 00:08:16.020 "is_configured": true, 00:08:16.020 "data_offset": 0, 00:08:16.020 "data_size": 65536 00:08:16.020 }, 00:08:16.020 { 00:08:16.020 "name": null, 00:08:16.020 "uuid": "8b51dcd4-6231-4413-9700-7206e5bfb8aa", 00:08:16.020 "is_configured": false, 00:08:16.020 "data_offset": 0, 00:08:16.020 "data_size": 65536 
00:08:16.020 }, 00:08:16.020 { 00:08:16.020 "name": null, 00:08:16.020 "uuid": "5b93a3ac-6e6b-4821-a692-c8e4e0813f82", 00:08:16.020 "is_configured": false, 00:08:16.020 "data_offset": 0, 00:08:16.020 "data_size": 65536 00:08:16.020 } 00:08:16.020 ] 00:08:16.020 }' 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.020 01:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.280 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:16.280 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.280 01:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.280 01:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.280 01:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.280 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:16.280 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:16.280 01:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.280 01:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.280 [2024-10-09 01:28:15.114721] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:16.280 01:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.280 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:16.280 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:16.280 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.280 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.280 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.280 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.280 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.280 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.280 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.280 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.280 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.280 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.280 01:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.280 01:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.280 01:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.280 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.280 "name": "Existed_Raid", 00:08:16.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.280 "strip_size_kb": 64, 00:08:16.280 "state": "configuring", 00:08:16.280 "raid_level": "raid0", 00:08:16.280 "superblock": false, 00:08:16.280 "num_base_bdevs": 3, 00:08:16.280 "num_base_bdevs_discovered": 2, 00:08:16.280 "num_base_bdevs_operational": 3, 00:08:16.280 "base_bdevs_list": [ 
00:08:16.280 { 00:08:16.280 "name": "BaseBdev1", 00:08:16.280 "uuid": "b9f72567-c599-45bc-bbff-89c4a782fc8f", 00:08:16.280 "is_configured": true, 00:08:16.280 "data_offset": 0, 00:08:16.280 "data_size": 65536 00:08:16.280 }, 00:08:16.280 { 00:08:16.280 "name": null, 00:08:16.280 "uuid": "8b51dcd4-6231-4413-9700-7206e5bfb8aa", 00:08:16.280 "is_configured": false, 00:08:16.280 "data_offset": 0, 00:08:16.280 "data_size": 65536 00:08:16.280 }, 00:08:16.280 { 00:08:16.280 "name": "BaseBdev3", 00:08:16.280 "uuid": "5b93a3ac-6e6b-4821-a692-c8e4e0813f82", 00:08:16.280 "is_configured": true, 00:08:16.280 "data_offset": 0, 00:08:16.280 "data_size": 65536 00:08:16.280 } 00:08:16.280 ] 00:08:16.280 }' 00:08:16.540 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.540 01:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.800 [2024-10-09 01:28:15.610881] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:08:16.800 "name": "Existed_Raid", 00:08:16.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.800 "strip_size_kb": 64, 00:08:16.800 "state": "configuring", 00:08:16.800 "raid_level": "raid0", 00:08:16.800 "superblock": false, 00:08:16.800 "num_base_bdevs": 3, 00:08:16.800 "num_base_bdevs_discovered": 1, 00:08:16.800 "num_base_bdevs_operational": 3, 00:08:16.800 "base_bdevs_list": [ 00:08:16.800 { 00:08:16.800 "name": null, 00:08:16.800 "uuid": "b9f72567-c599-45bc-bbff-89c4a782fc8f", 00:08:16.800 "is_configured": false, 00:08:16.800 "data_offset": 0, 00:08:16.800 "data_size": 65536 00:08:16.800 }, 00:08:16.800 { 00:08:16.800 "name": null, 00:08:16.800 "uuid": "8b51dcd4-6231-4413-9700-7206e5bfb8aa", 00:08:16.800 "is_configured": false, 00:08:16.800 "data_offset": 0, 00:08:16.800 "data_size": 65536 00:08:16.800 }, 00:08:16.800 { 00:08:16.800 "name": "BaseBdev3", 00:08:16.800 "uuid": "5b93a3ac-6e6b-4821-a692-c8e4e0813f82", 00:08:16.800 "is_configured": true, 00:08:16.800 "data_offset": 0, 00:08:16.800 "data_size": 65536 00:08:16.800 } 00:08:16.800 ] 00:08:16.800 }' 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.800 01:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.370 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:17.370 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.370 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.370 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.370 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.370 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 
00:08:17.370 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:17.370 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.370 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.370 [2024-10-09 01:28:16.082991] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.370 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.370 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:17.370 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.370 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:17.370 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:17.370 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.370 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.370 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.370 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.370 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.370 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.370 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.370 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:08:17.370 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.370 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.370 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.370 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.371 "name": "Existed_Raid", 00:08:17.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.371 "strip_size_kb": 64, 00:08:17.371 "state": "configuring", 00:08:17.371 "raid_level": "raid0", 00:08:17.371 "superblock": false, 00:08:17.371 "num_base_bdevs": 3, 00:08:17.371 "num_base_bdevs_discovered": 2, 00:08:17.371 "num_base_bdevs_operational": 3, 00:08:17.371 "base_bdevs_list": [ 00:08:17.371 { 00:08:17.371 "name": null, 00:08:17.371 "uuid": "b9f72567-c599-45bc-bbff-89c4a782fc8f", 00:08:17.371 "is_configured": false, 00:08:17.371 "data_offset": 0, 00:08:17.371 "data_size": 65536 00:08:17.371 }, 00:08:17.371 { 00:08:17.371 "name": "BaseBdev2", 00:08:17.371 "uuid": "8b51dcd4-6231-4413-9700-7206e5bfb8aa", 00:08:17.371 "is_configured": true, 00:08:17.371 "data_offset": 0, 00:08:17.371 "data_size": 65536 00:08:17.371 }, 00:08:17.371 { 00:08:17.371 "name": "BaseBdev3", 00:08:17.371 "uuid": "5b93a3ac-6e6b-4821-a692-c8e4e0813f82", 00:08:17.371 "is_configured": true, 00:08:17.371 "data_offset": 0, 00:08:17.371 "data_size": 65536 00:08:17.371 } 00:08:17.371 ] 00:08:17.371 }' 00:08:17.371 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.371 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.631 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.631 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:17.631 01:28:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.631 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.631 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.631 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:17.631 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.631 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.631 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.631 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:17.631 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.892 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b9f72567-c599-45bc-bbff-89c4a782fc8f 00:08:17.892 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.892 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.892 [2024-10-09 01:28:16.548812] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:17.892 [2024-10-09 01:28:16.548868] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:17.892 [2024-10-09 01:28:16.548876] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:17.892 [2024-10-09 01:28:16.549163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:08:17.892 [2024-10-09 01:28:16.549293] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:17.892 
[2024-10-09 01:28:16.549310] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:17.892 [2024-10-09 01:28:16.549547] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.892 NewBaseBdev 00:08:17.892 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.892 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:17.892 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:17.892 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:17.892 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:17.892 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:17.892 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:17.892 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:17.892 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.892 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.892 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.892 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:17.892 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.892 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.892 [ 00:08:17.892 { 00:08:17.892 "name": "NewBaseBdev", 00:08:17.892 "aliases": [ 00:08:17.892 
"b9f72567-c599-45bc-bbff-89c4a782fc8f" 00:08:17.892 ], 00:08:17.892 "product_name": "Malloc disk", 00:08:17.892 "block_size": 512, 00:08:17.892 "num_blocks": 65536, 00:08:17.892 "uuid": "b9f72567-c599-45bc-bbff-89c4a782fc8f", 00:08:17.892 "assigned_rate_limits": { 00:08:17.892 "rw_ios_per_sec": 0, 00:08:17.892 "rw_mbytes_per_sec": 0, 00:08:17.892 "r_mbytes_per_sec": 0, 00:08:17.892 "w_mbytes_per_sec": 0 00:08:17.892 }, 00:08:17.892 "claimed": true, 00:08:17.892 "claim_type": "exclusive_write", 00:08:17.892 "zoned": false, 00:08:17.892 "supported_io_types": { 00:08:17.892 "read": true, 00:08:17.892 "write": true, 00:08:17.892 "unmap": true, 00:08:17.892 "flush": true, 00:08:17.892 "reset": true, 00:08:17.892 "nvme_admin": false, 00:08:17.892 "nvme_io": false, 00:08:17.892 "nvme_io_md": false, 00:08:17.892 "write_zeroes": true, 00:08:17.892 "zcopy": true, 00:08:17.892 "get_zone_info": false, 00:08:17.892 "zone_management": false, 00:08:17.892 "zone_append": false, 00:08:17.892 "compare": false, 00:08:17.892 "compare_and_write": false, 00:08:17.892 "abort": true, 00:08:17.892 "seek_hole": false, 00:08:17.892 "seek_data": false, 00:08:17.892 "copy": true, 00:08:17.892 "nvme_iov_md": false 00:08:17.892 }, 00:08:17.892 "memory_domains": [ 00:08:17.892 { 00:08:17.892 "dma_device_id": "system", 00:08:17.892 "dma_device_type": 1 00:08:17.892 }, 00:08:17.892 { 00:08:17.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.892 "dma_device_type": 2 00:08:17.892 } 00:08:17.892 ], 00:08:17.892 "driver_specific": {} 00:08:17.892 } 00:08:17.892 ] 00:08:17.892 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.892 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:17.892 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:17.892 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:08:17.893 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.893 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:17.893 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.893 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.893 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.893 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.893 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.893 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.893 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.893 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.893 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.893 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.893 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.893 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.893 "name": "Existed_Raid", 00:08:17.893 "uuid": "a053f655-d3c2-4e58-b955-843b03366052", 00:08:17.893 "strip_size_kb": 64, 00:08:17.893 "state": "online", 00:08:17.893 "raid_level": "raid0", 00:08:17.893 "superblock": false, 00:08:17.893 "num_base_bdevs": 3, 00:08:17.893 "num_base_bdevs_discovered": 3, 00:08:17.893 "num_base_bdevs_operational": 3, 00:08:17.893 "base_bdevs_list": [ 
00:08:17.893 { 00:08:17.893 "name": "NewBaseBdev", 00:08:17.893 "uuid": "b9f72567-c599-45bc-bbff-89c4a782fc8f", 00:08:17.893 "is_configured": true, 00:08:17.893 "data_offset": 0, 00:08:17.893 "data_size": 65536 00:08:17.893 }, 00:08:17.893 { 00:08:17.893 "name": "BaseBdev2", 00:08:17.893 "uuid": "8b51dcd4-6231-4413-9700-7206e5bfb8aa", 00:08:17.893 "is_configured": true, 00:08:17.893 "data_offset": 0, 00:08:17.893 "data_size": 65536 00:08:17.893 }, 00:08:17.893 { 00:08:17.893 "name": "BaseBdev3", 00:08:17.893 "uuid": "5b93a3ac-6e6b-4821-a692-c8e4e0813f82", 00:08:17.893 "is_configured": true, 00:08:17.893 "data_offset": 0, 00:08:17.893 "data_size": 65536 00:08:17.893 } 00:08:17.893 ] 00:08:17.893 }' 00:08:17.893 01:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.893 01:28:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.153 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:18.153 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:18.153 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:18.153 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:18.153 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:18.153 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:18.153 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:18.153 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:18.153 01:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.153 01:28:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:18.153 [2024-10-09 01:28:17.033339] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:18.413 01:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.413 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:18.413 "name": "Existed_Raid", 00:08:18.413 "aliases": [ 00:08:18.413 "a053f655-d3c2-4e58-b955-843b03366052" 00:08:18.413 ], 00:08:18.413 "product_name": "Raid Volume", 00:08:18.413 "block_size": 512, 00:08:18.413 "num_blocks": 196608, 00:08:18.413 "uuid": "a053f655-d3c2-4e58-b955-843b03366052", 00:08:18.413 "assigned_rate_limits": { 00:08:18.413 "rw_ios_per_sec": 0, 00:08:18.413 "rw_mbytes_per_sec": 0, 00:08:18.413 "r_mbytes_per_sec": 0, 00:08:18.413 "w_mbytes_per_sec": 0 00:08:18.413 }, 00:08:18.413 "claimed": false, 00:08:18.413 "zoned": false, 00:08:18.413 "supported_io_types": { 00:08:18.413 "read": true, 00:08:18.413 "write": true, 00:08:18.413 "unmap": true, 00:08:18.413 "flush": true, 00:08:18.413 "reset": true, 00:08:18.413 "nvme_admin": false, 00:08:18.413 "nvme_io": false, 00:08:18.413 "nvme_io_md": false, 00:08:18.413 "write_zeroes": true, 00:08:18.413 "zcopy": false, 00:08:18.413 "get_zone_info": false, 00:08:18.413 "zone_management": false, 00:08:18.413 "zone_append": false, 00:08:18.413 "compare": false, 00:08:18.413 "compare_and_write": false, 00:08:18.413 "abort": false, 00:08:18.413 "seek_hole": false, 00:08:18.413 "seek_data": false, 00:08:18.413 "copy": false, 00:08:18.413 "nvme_iov_md": false 00:08:18.413 }, 00:08:18.413 "memory_domains": [ 00:08:18.413 { 00:08:18.413 "dma_device_id": "system", 00:08:18.413 "dma_device_type": 1 00:08:18.413 }, 00:08:18.413 { 00:08:18.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.413 "dma_device_type": 2 00:08:18.413 }, 00:08:18.413 { 00:08:18.413 "dma_device_id": "system", 00:08:18.413 "dma_device_type": 1 00:08:18.413 }, 
00:08:18.413 { 00:08:18.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.413 "dma_device_type": 2 00:08:18.413 }, 00:08:18.413 { 00:08:18.413 "dma_device_id": "system", 00:08:18.413 "dma_device_type": 1 00:08:18.413 }, 00:08:18.413 { 00:08:18.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.413 "dma_device_type": 2 00:08:18.413 } 00:08:18.413 ], 00:08:18.413 "driver_specific": { 00:08:18.413 "raid": { 00:08:18.413 "uuid": "a053f655-d3c2-4e58-b955-843b03366052", 00:08:18.413 "strip_size_kb": 64, 00:08:18.413 "state": "online", 00:08:18.413 "raid_level": "raid0", 00:08:18.413 "superblock": false, 00:08:18.413 "num_base_bdevs": 3, 00:08:18.413 "num_base_bdevs_discovered": 3, 00:08:18.413 "num_base_bdevs_operational": 3, 00:08:18.413 "base_bdevs_list": [ 00:08:18.413 { 00:08:18.413 "name": "NewBaseBdev", 00:08:18.413 "uuid": "b9f72567-c599-45bc-bbff-89c4a782fc8f", 00:08:18.413 "is_configured": true, 00:08:18.413 "data_offset": 0, 00:08:18.413 "data_size": 65536 00:08:18.413 }, 00:08:18.413 { 00:08:18.413 "name": "BaseBdev2", 00:08:18.413 "uuid": "8b51dcd4-6231-4413-9700-7206e5bfb8aa", 00:08:18.413 "is_configured": true, 00:08:18.413 "data_offset": 0, 00:08:18.413 "data_size": 65536 00:08:18.414 }, 00:08:18.414 { 00:08:18.414 "name": "BaseBdev3", 00:08:18.414 "uuid": "5b93a3ac-6e6b-4821-a692-c8e4e0813f82", 00:08:18.414 "is_configured": true, 00:08:18.414 "data_offset": 0, 00:08:18.414 "data_size": 65536 00:08:18.414 } 00:08:18.414 ] 00:08:18.414 } 00:08:18.414 } 00:08:18.414 }' 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:18.414 BaseBdev2 00:08:18.414 BaseBdev3' 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 
00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.414 [2024-10-09 01:28:17.276992] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:18.414 [2024-10-09 01:28:17.277066] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:18.414 [2024-10-09 01:28:17.277143] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:18.414 [2024-10-09 01:28:17.277202] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:18.414 [2024-10-09 01:28:17.277212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:18.414 01:28:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 76144 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 76144 ']' 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 76144 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:18.414 01:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76144 00:08:18.674 01:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:18.674 01:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:18.674 01:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76144' 00:08:18.674 killing process with pid 76144 00:08:18.674 01:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 76144 00:08:18.674 [2024-10-09 01:28:17.315487] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:18.674 01:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 76144 00:08:18.674 [2024-10-09 01:28:17.371836] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:18.934 01:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:18.934 ************************************ 00:08:18.934 END TEST raid_state_function_test 00:08:18.934 ************************************ 00:08:18.934 00:08:18.934 real 0m8.780s 00:08:18.934 user 0m14.610s 00:08:18.934 sys 0m1.926s 00:08:18.934 01:28:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.934 01:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.934 01:28:17 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:18.934 01:28:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:18.934 01:28:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.934 01:28:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:18.934 ************************************ 00:08:18.934 START TEST raid_state_function_test_sb 00:08:18.934 ************************************ 00:08:18.934 01:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:08:18.934 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:18.934 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:18.934 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:18.934 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:18.934 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:18.934 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:18.934 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:18.934 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:18.934 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:18.934 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:18.934 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 
-- # (( i++ )) 00:08:19.195 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:19.195 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:19.195 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:19.195 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:19.195 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:19.195 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:19.195 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:19.195 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:19.195 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:19.195 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:19.195 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:19.195 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:19.195 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:19.195 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:19.195 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:19.195 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=76743 00:08:19.195 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:08:19.195 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76743' 00:08:19.195 Process raid pid: 76743 00:08:19.195 01:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 76743 00:08:19.195 01:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 76743 ']' 00:08:19.195 01:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.195 01:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:19.195 01:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.195 01:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:19.195 01:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.195 [2024-10-09 01:28:17.918506] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:08:19.195 [2024-10-09 01:28:17.918734] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.195 [2024-10-09 01:28:18.055862] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:19.195 [2024-10-09 01:28:18.083614] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.455 [2024-10-09 01:28:18.162417] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.455 [2024-10-09 01:28:18.242389] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.455 [2024-10-09 01:28:18.242529] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.025 01:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:20.025 01:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:20.025 01:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:20.025 01:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.025 01:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.025 [2024-10-09 01:28:18.745087] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:20.025 [2024-10-09 01:28:18.745156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:20.025 [2024-10-09 01:28:18.745173] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:20.025 [2024-10-09 01:28:18.745181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:20.025 [2024-10-09 01:28:18.745193] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:20.025 [2024-10-09 01:28:18.745199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:20.025 01:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.025 01:28:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:20.025 01:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.025 01:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.025 01:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.025 01:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.025 01:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.025 01:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.025 01:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.025 01:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.025 01:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.025 01:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.025 01:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.026 01:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.026 01:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.026 01:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.026 01:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.026 "name": "Existed_Raid", 00:08:20.026 "uuid": "189eddb2-a745-460c-952b-108f57e95951", 00:08:20.026 "strip_size_kb": 64, 
00:08:20.026 "state": "configuring", 00:08:20.026 "raid_level": "raid0", 00:08:20.026 "superblock": true, 00:08:20.026 "num_base_bdevs": 3, 00:08:20.026 "num_base_bdevs_discovered": 0, 00:08:20.026 "num_base_bdevs_operational": 3, 00:08:20.026 "base_bdevs_list": [ 00:08:20.026 { 00:08:20.026 "name": "BaseBdev1", 00:08:20.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.026 "is_configured": false, 00:08:20.026 "data_offset": 0, 00:08:20.026 "data_size": 0 00:08:20.026 }, 00:08:20.026 { 00:08:20.026 "name": "BaseBdev2", 00:08:20.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.026 "is_configured": false, 00:08:20.026 "data_offset": 0, 00:08:20.026 "data_size": 0 00:08:20.026 }, 00:08:20.026 { 00:08:20.026 "name": "BaseBdev3", 00:08:20.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.026 "is_configured": false, 00:08:20.026 "data_offset": 0, 00:08:20.026 "data_size": 0 00:08:20.026 } 00:08:20.026 ] 00:08:20.026 }' 00:08:20.026 01:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.026 01:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.286 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:20.286 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.286 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.286 [2024-10-09 01:28:19.161097] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:20.286 [2024-10-09 01:28:19.161238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:20.286 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.286 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd 
bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:20.286 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.286 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.286 [2024-10-09 01:28:19.173101] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:20.286 [2024-10-09 01:28:19.173220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:20.286 [2024-10-09 01:28:19.173251] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:20.286 [2024-10-09 01:28:19.173272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:20.286 [2024-10-09 01:28:19.173292] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:20.286 [2024-10-09 01:28:19.173311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:20.286 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.547 [2024-10-09 01:28:19.200587] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:20.547 BaseBdev1 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.547 [ 00:08:20.547 { 00:08:20.547 "name": "BaseBdev1", 00:08:20.547 "aliases": [ 00:08:20.547 "3557f829-bbd5-4650-a86c-ac2cfefcbaa9" 00:08:20.547 ], 00:08:20.547 "product_name": "Malloc disk", 00:08:20.547 "block_size": 512, 00:08:20.547 "num_blocks": 65536, 00:08:20.547 "uuid": "3557f829-bbd5-4650-a86c-ac2cfefcbaa9", 00:08:20.547 "assigned_rate_limits": { 00:08:20.547 "rw_ios_per_sec": 0, 00:08:20.547 "rw_mbytes_per_sec": 0, 00:08:20.547 "r_mbytes_per_sec": 0, 00:08:20.547 "w_mbytes_per_sec": 0 00:08:20.547 }, 00:08:20.547 "claimed": true, 00:08:20.547 "claim_type": "exclusive_write", 00:08:20.547 "zoned": false, 00:08:20.547 "supported_io_types": { 
00:08:20.547 "read": true, 00:08:20.547 "write": true, 00:08:20.547 "unmap": true, 00:08:20.547 "flush": true, 00:08:20.547 "reset": true, 00:08:20.547 "nvme_admin": false, 00:08:20.547 "nvme_io": false, 00:08:20.547 "nvme_io_md": false, 00:08:20.547 "write_zeroes": true, 00:08:20.547 "zcopy": true, 00:08:20.547 "get_zone_info": false, 00:08:20.547 "zone_management": false, 00:08:20.547 "zone_append": false, 00:08:20.547 "compare": false, 00:08:20.547 "compare_and_write": false, 00:08:20.547 "abort": true, 00:08:20.547 "seek_hole": false, 00:08:20.547 "seek_data": false, 00:08:20.547 "copy": true, 00:08:20.547 "nvme_iov_md": false 00:08:20.547 }, 00:08:20.547 "memory_domains": [ 00:08:20.547 { 00:08:20.547 "dma_device_id": "system", 00:08:20.547 "dma_device_type": 1 00:08:20.547 }, 00:08:20.547 { 00:08:20.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.547 "dma_device_type": 2 00:08:20.547 } 00:08:20.547 ], 00:08:20.547 "driver_specific": {} 00:08:20.547 } 00:08:20.547 ] 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.547 01:28:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.547 "name": "Existed_Raid", 00:08:20.547 "uuid": "aa50d84d-2e49-495c-bc87-d7440a6fc770", 00:08:20.547 "strip_size_kb": 64, 00:08:20.547 "state": "configuring", 00:08:20.547 "raid_level": "raid0", 00:08:20.547 "superblock": true, 00:08:20.547 "num_base_bdevs": 3, 00:08:20.547 "num_base_bdevs_discovered": 1, 00:08:20.547 "num_base_bdevs_operational": 3, 00:08:20.547 "base_bdevs_list": [ 00:08:20.547 { 00:08:20.547 "name": "BaseBdev1", 00:08:20.547 "uuid": "3557f829-bbd5-4650-a86c-ac2cfefcbaa9", 00:08:20.547 "is_configured": true, 00:08:20.547 "data_offset": 2048, 00:08:20.547 "data_size": 63488 00:08:20.547 }, 00:08:20.547 { 00:08:20.547 "name": "BaseBdev2", 00:08:20.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.547 "is_configured": false, 00:08:20.547 "data_offset": 0, 00:08:20.547 "data_size": 0 00:08:20.547 }, 00:08:20.547 { 00:08:20.547 "name": 
"BaseBdev3", 00:08:20.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.547 "is_configured": false, 00:08:20.547 "data_offset": 0, 00:08:20.547 "data_size": 0 00:08:20.547 } 00:08:20.547 ] 00:08:20.547 }' 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.547 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.807 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:20.807 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.068 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.068 [2024-10-09 01:28:19.704832] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:21.068 [2024-10-09 01:28:19.704995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:21.068 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.068 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:21.068 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.068 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.068 [2024-10-09 01:28:19.716804] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:21.068 [2024-10-09 01:28:19.718986] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:21.068 [2024-10-09 01:28:19.719058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:21.068 [2024-10-09 01:28:19.719088] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:21.068 [2024-10-09 01:28:19.719109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:21.068 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.068 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:21.068 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:21.068 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:21.068 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.068 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.068 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.068 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.068 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.068 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.068 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.068 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.068 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.068 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.068 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:21.068 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.068 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.068 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.068 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.068 "name": "Existed_Raid", 00:08:21.068 "uuid": "a3f6b42d-e526-436e-8e0b-dd6f4a26bc74", 00:08:21.068 "strip_size_kb": 64, 00:08:21.068 "state": "configuring", 00:08:21.068 "raid_level": "raid0", 00:08:21.068 "superblock": true, 00:08:21.068 "num_base_bdevs": 3, 00:08:21.068 "num_base_bdevs_discovered": 1, 00:08:21.068 "num_base_bdevs_operational": 3, 00:08:21.068 "base_bdevs_list": [ 00:08:21.068 { 00:08:21.068 "name": "BaseBdev1", 00:08:21.068 "uuid": "3557f829-bbd5-4650-a86c-ac2cfefcbaa9", 00:08:21.068 "is_configured": true, 00:08:21.068 "data_offset": 2048, 00:08:21.068 "data_size": 63488 00:08:21.068 }, 00:08:21.068 { 00:08:21.068 "name": "BaseBdev2", 00:08:21.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.068 "is_configured": false, 00:08:21.068 "data_offset": 0, 00:08:21.068 "data_size": 0 00:08:21.068 }, 00:08:21.068 { 00:08:21.068 "name": "BaseBdev3", 00:08:21.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.068 "is_configured": false, 00:08:21.068 "data_offset": 0, 00:08:21.068 "data_size": 0 00:08:21.068 } 00:08:21.068 ] 00:08:21.068 }' 00:08:21.068 01:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.068 01:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.341 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:21.341 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:21.341 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.341 [2024-10-09 01:28:20.175922] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:21.341 BaseBdev2 00:08:21.341 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.342 [ 00:08:21.342 { 00:08:21.342 "name": "BaseBdev2", 00:08:21.342 "aliases": [ 00:08:21.342 
"a4f43116-8ff1-4379-a7f2-3ff90763cc09" 00:08:21.342 ], 00:08:21.342 "product_name": "Malloc disk", 00:08:21.342 "block_size": 512, 00:08:21.342 "num_blocks": 65536, 00:08:21.342 "uuid": "a4f43116-8ff1-4379-a7f2-3ff90763cc09", 00:08:21.342 "assigned_rate_limits": { 00:08:21.342 "rw_ios_per_sec": 0, 00:08:21.342 "rw_mbytes_per_sec": 0, 00:08:21.342 "r_mbytes_per_sec": 0, 00:08:21.342 "w_mbytes_per_sec": 0 00:08:21.342 }, 00:08:21.342 "claimed": true, 00:08:21.342 "claim_type": "exclusive_write", 00:08:21.342 "zoned": false, 00:08:21.342 "supported_io_types": { 00:08:21.342 "read": true, 00:08:21.342 "write": true, 00:08:21.342 "unmap": true, 00:08:21.342 "flush": true, 00:08:21.342 "reset": true, 00:08:21.342 "nvme_admin": false, 00:08:21.342 "nvme_io": false, 00:08:21.342 "nvme_io_md": false, 00:08:21.342 "write_zeroes": true, 00:08:21.342 "zcopy": true, 00:08:21.342 "get_zone_info": false, 00:08:21.342 "zone_management": false, 00:08:21.342 "zone_append": false, 00:08:21.342 "compare": false, 00:08:21.342 "compare_and_write": false, 00:08:21.342 "abort": true, 00:08:21.342 "seek_hole": false, 00:08:21.342 "seek_data": false, 00:08:21.342 "copy": true, 00:08:21.342 "nvme_iov_md": false 00:08:21.342 }, 00:08:21.342 "memory_domains": [ 00:08:21.342 { 00:08:21.342 "dma_device_id": "system", 00:08:21.342 "dma_device_type": 1 00:08:21.342 }, 00:08:21.342 { 00:08:21.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.342 "dma_device_type": 2 00:08:21.342 } 00:08:21.342 ], 00:08:21.342 "driver_specific": {} 00:08:21.342 } 00:08:21.342 ] 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.342 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.660 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.660 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.660 "name": "Existed_Raid", 00:08:21.660 "uuid": "a3f6b42d-e526-436e-8e0b-dd6f4a26bc74", 00:08:21.660 
"strip_size_kb": 64, 00:08:21.660 "state": "configuring", 00:08:21.660 "raid_level": "raid0", 00:08:21.660 "superblock": true, 00:08:21.660 "num_base_bdevs": 3, 00:08:21.660 "num_base_bdevs_discovered": 2, 00:08:21.660 "num_base_bdevs_operational": 3, 00:08:21.660 "base_bdevs_list": [ 00:08:21.660 { 00:08:21.660 "name": "BaseBdev1", 00:08:21.660 "uuid": "3557f829-bbd5-4650-a86c-ac2cfefcbaa9", 00:08:21.660 "is_configured": true, 00:08:21.660 "data_offset": 2048, 00:08:21.660 "data_size": 63488 00:08:21.660 }, 00:08:21.660 { 00:08:21.660 "name": "BaseBdev2", 00:08:21.660 "uuid": "a4f43116-8ff1-4379-a7f2-3ff90763cc09", 00:08:21.660 "is_configured": true, 00:08:21.660 "data_offset": 2048, 00:08:21.660 "data_size": 63488 00:08:21.660 }, 00:08:21.660 { 00:08:21.660 "name": "BaseBdev3", 00:08:21.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.660 "is_configured": false, 00:08:21.660 "data_offset": 0, 00:08:21.660 "data_size": 0 00:08:21.660 } 00:08:21.660 ] 00:08:21.660 }' 00:08:21.660 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.660 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.921 [2024-10-09 01:28:20.600907] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:21.921 [2024-10-09 01:28:20.601193] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:21.921 [2024-10-09 01:28:20.601256] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:21.921 BaseBdev3 00:08:21.921 [2024-10-09 01:28:20.601592] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:21.921 [2024-10-09 01:28:20.601729] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:21.921 [2024-10-09 01:28:20.601796] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:21.921 [2024-10-09 01:28:20.601949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.921 [ 00:08:21.921 { 00:08:21.921 "name": "BaseBdev3", 00:08:21.921 "aliases": [ 00:08:21.921 "20f8c50d-5bf0-4c63-9148-acd7fb85efcf" 00:08:21.921 ], 00:08:21.921 "product_name": "Malloc disk", 00:08:21.921 "block_size": 512, 00:08:21.921 "num_blocks": 65536, 00:08:21.921 "uuid": "20f8c50d-5bf0-4c63-9148-acd7fb85efcf", 00:08:21.921 "assigned_rate_limits": { 00:08:21.921 "rw_ios_per_sec": 0, 00:08:21.921 "rw_mbytes_per_sec": 0, 00:08:21.921 "r_mbytes_per_sec": 0, 00:08:21.921 "w_mbytes_per_sec": 0 00:08:21.921 }, 00:08:21.921 "claimed": true, 00:08:21.921 "claim_type": "exclusive_write", 00:08:21.921 "zoned": false, 00:08:21.921 "supported_io_types": { 00:08:21.921 "read": true, 00:08:21.921 "write": true, 00:08:21.921 "unmap": true, 00:08:21.921 "flush": true, 00:08:21.921 "reset": true, 00:08:21.921 "nvme_admin": false, 00:08:21.921 "nvme_io": false, 00:08:21.921 "nvme_io_md": false, 00:08:21.921 "write_zeroes": true, 00:08:21.921 "zcopy": true, 00:08:21.921 "get_zone_info": false, 00:08:21.921 "zone_management": false, 00:08:21.921 "zone_append": false, 00:08:21.921 "compare": false, 00:08:21.921 "compare_and_write": false, 00:08:21.921 "abort": true, 00:08:21.921 "seek_hole": false, 00:08:21.921 "seek_data": false, 00:08:21.921 "copy": true, 00:08:21.921 "nvme_iov_md": false 00:08:21.921 }, 00:08:21.921 "memory_domains": [ 00:08:21.921 { 00:08:21.921 "dma_device_id": "system", 00:08:21.921 "dma_device_type": 1 00:08:21.921 }, 00:08:21.921 { 00:08:21.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.921 "dma_device_type": 2 00:08:21.921 } 00:08:21.921 ], 00:08:21.921 "driver_specific": {} 00:08:21.921 } 00:08:21.921 ] 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:21.921 
01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.921 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.921 01:28:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.921 "name": "Existed_Raid", 00:08:21.921 "uuid": "a3f6b42d-e526-436e-8e0b-dd6f4a26bc74", 00:08:21.921 "strip_size_kb": 64, 00:08:21.921 "state": "online", 00:08:21.921 "raid_level": "raid0", 00:08:21.921 "superblock": true, 00:08:21.921 "num_base_bdevs": 3, 00:08:21.921 "num_base_bdevs_discovered": 3, 00:08:21.921 "num_base_bdevs_operational": 3, 00:08:21.921 "base_bdevs_list": [ 00:08:21.921 { 00:08:21.921 "name": "BaseBdev1", 00:08:21.922 "uuid": "3557f829-bbd5-4650-a86c-ac2cfefcbaa9", 00:08:21.922 "is_configured": true, 00:08:21.922 "data_offset": 2048, 00:08:21.922 "data_size": 63488 00:08:21.922 }, 00:08:21.922 { 00:08:21.922 "name": "BaseBdev2", 00:08:21.922 "uuid": "a4f43116-8ff1-4379-a7f2-3ff90763cc09", 00:08:21.922 "is_configured": true, 00:08:21.922 "data_offset": 2048, 00:08:21.922 "data_size": 63488 00:08:21.922 }, 00:08:21.922 { 00:08:21.922 "name": "BaseBdev3", 00:08:21.922 "uuid": "20f8c50d-5bf0-4c63-9148-acd7fb85efcf", 00:08:21.922 "is_configured": true, 00:08:21.922 "data_offset": 2048, 00:08:21.922 "data_size": 63488 00:08:21.922 } 00:08:21.922 ] 00:08:21.922 }' 00:08:21.922 01:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.922 01:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.181 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:22.181 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:22.181 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:22.181 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:22.181 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:22.181 
01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:22.181 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:22.181 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:22.181 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.181 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.442 [2024-10-09 01:28:21.077507] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.442 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.442 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:22.442 "name": "Existed_Raid", 00:08:22.442 "aliases": [ 00:08:22.442 "a3f6b42d-e526-436e-8e0b-dd6f4a26bc74" 00:08:22.442 ], 00:08:22.442 "product_name": "Raid Volume", 00:08:22.442 "block_size": 512, 00:08:22.442 "num_blocks": 190464, 00:08:22.442 "uuid": "a3f6b42d-e526-436e-8e0b-dd6f4a26bc74", 00:08:22.442 "assigned_rate_limits": { 00:08:22.442 "rw_ios_per_sec": 0, 00:08:22.442 "rw_mbytes_per_sec": 0, 00:08:22.442 "r_mbytes_per_sec": 0, 00:08:22.442 "w_mbytes_per_sec": 0 00:08:22.442 }, 00:08:22.442 "claimed": false, 00:08:22.442 "zoned": false, 00:08:22.442 "supported_io_types": { 00:08:22.442 "read": true, 00:08:22.442 "write": true, 00:08:22.442 "unmap": true, 00:08:22.442 "flush": true, 00:08:22.442 "reset": true, 00:08:22.442 "nvme_admin": false, 00:08:22.442 "nvme_io": false, 00:08:22.442 "nvme_io_md": false, 00:08:22.442 "write_zeroes": true, 00:08:22.442 "zcopy": false, 00:08:22.442 "get_zone_info": false, 00:08:22.442 "zone_management": false, 00:08:22.442 "zone_append": false, 00:08:22.442 "compare": false, 00:08:22.442 "compare_and_write": false, 00:08:22.442 "abort": 
false, 00:08:22.442 "seek_hole": false, 00:08:22.442 "seek_data": false, 00:08:22.442 "copy": false, 00:08:22.442 "nvme_iov_md": false 00:08:22.442 }, 00:08:22.442 "memory_domains": [ 00:08:22.442 { 00:08:22.442 "dma_device_id": "system", 00:08:22.442 "dma_device_type": 1 00:08:22.442 }, 00:08:22.442 { 00:08:22.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.442 "dma_device_type": 2 00:08:22.442 }, 00:08:22.442 { 00:08:22.442 "dma_device_id": "system", 00:08:22.442 "dma_device_type": 1 00:08:22.442 }, 00:08:22.442 { 00:08:22.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.442 "dma_device_type": 2 00:08:22.442 }, 00:08:22.442 { 00:08:22.442 "dma_device_id": "system", 00:08:22.442 "dma_device_type": 1 00:08:22.442 }, 00:08:22.442 { 00:08:22.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.442 "dma_device_type": 2 00:08:22.442 } 00:08:22.442 ], 00:08:22.442 "driver_specific": { 00:08:22.442 "raid": { 00:08:22.442 "uuid": "a3f6b42d-e526-436e-8e0b-dd6f4a26bc74", 00:08:22.442 "strip_size_kb": 64, 00:08:22.442 "state": "online", 00:08:22.442 "raid_level": "raid0", 00:08:22.442 "superblock": true, 00:08:22.442 "num_base_bdevs": 3, 00:08:22.442 "num_base_bdevs_discovered": 3, 00:08:22.442 "num_base_bdevs_operational": 3, 00:08:22.442 "base_bdevs_list": [ 00:08:22.442 { 00:08:22.442 "name": "BaseBdev1", 00:08:22.442 "uuid": "3557f829-bbd5-4650-a86c-ac2cfefcbaa9", 00:08:22.442 "is_configured": true, 00:08:22.442 "data_offset": 2048, 00:08:22.442 "data_size": 63488 00:08:22.442 }, 00:08:22.442 { 00:08:22.442 "name": "BaseBdev2", 00:08:22.442 "uuid": "a4f43116-8ff1-4379-a7f2-3ff90763cc09", 00:08:22.442 "is_configured": true, 00:08:22.442 "data_offset": 2048, 00:08:22.442 "data_size": 63488 00:08:22.442 }, 00:08:22.442 { 00:08:22.442 "name": "BaseBdev3", 00:08:22.442 "uuid": "20f8c50d-5bf0-4c63-9148-acd7fb85efcf", 00:08:22.442 "is_configured": true, 00:08:22.442 "data_offset": 2048, 00:08:22.442 "data_size": 63488 00:08:22.442 } 00:08:22.442 ] 00:08:22.442 } 
00:08:22.442 } 00:08:22.442 }' 00:08:22.442 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:22.442 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:22.442 BaseBdev2 00:08:22.442 BaseBdev3' 00:08:22.442 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.442 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:22.442 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:22.442 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.442 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:22.442 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.442 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.442 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.443 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:22.443 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:22.443 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:22.443 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:22.443 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.443 01:28:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.443 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.443 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.443 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:22.443 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:22.443 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:22.443 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.443 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:22.443 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.443 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.443 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.443 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:22.443 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:22.443 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:22.443 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.443 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.443 [2024-10-09 01:28:21.333281] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:08:22.443 [2024-10-09 01:28:21.333332] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:22.443 [2024-10-09 01:28:21.333399] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.703 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.703 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:22.703 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:22.703 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:22.703 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:22.703 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:22.703 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:22.703 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.703 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:22.703 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.703 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.703 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.703 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.703 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.703 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.703 
01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.703 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.703 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.703 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.703 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.703 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.703 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.703 "name": "Existed_Raid", 00:08:22.703 "uuid": "a3f6b42d-e526-436e-8e0b-dd6f4a26bc74", 00:08:22.703 "strip_size_kb": 64, 00:08:22.703 "state": "offline", 00:08:22.703 "raid_level": "raid0", 00:08:22.703 "superblock": true, 00:08:22.703 "num_base_bdevs": 3, 00:08:22.703 "num_base_bdevs_discovered": 2, 00:08:22.703 "num_base_bdevs_operational": 2, 00:08:22.703 "base_bdevs_list": [ 00:08:22.703 { 00:08:22.703 "name": null, 00:08:22.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.703 "is_configured": false, 00:08:22.703 "data_offset": 0, 00:08:22.703 "data_size": 63488 00:08:22.703 }, 00:08:22.703 { 00:08:22.703 "name": "BaseBdev2", 00:08:22.703 "uuid": "a4f43116-8ff1-4379-a7f2-3ff90763cc09", 00:08:22.703 "is_configured": true, 00:08:22.703 "data_offset": 2048, 00:08:22.703 "data_size": 63488 00:08:22.703 }, 00:08:22.703 { 00:08:22.703 "name": "BaseBdev3", 00:08:22.703 "uuid": "20f8c50d-5bf0-4c63-9148-acd7fb85efcf", 00:08:22.703 "is_configured": true, 00:08:22.703 "data_offset": 2048, 00:08:22.703 "data_size": 63488 00:08:22.703 } 00:08:22.703 ] 00:08:22.703 }' 00:08:22.703 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.703 
01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.963 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:22.963 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:22.963 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.963 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:22.963 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.963 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.963 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.224 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:23.224 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:23.224 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:23.224 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.224 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.224 [2024-10-09 01:28:21.863020] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:23.224 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.224 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:23.224 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:23.224 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:23.224 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.224 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.224 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:23.224 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.224 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:23.224 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:23.224 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:23.224 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.224 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.224 [2024-10-09 01:28:21.943562] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:23.224 [2024-10-09 01:28:21.943628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:23.224 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.224 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:23.224 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:23.224 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.224 01:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:23.224 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:23.224 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.224 01:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.224 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:23.224 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:23.224 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:23.224 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:23.224 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:23.224 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:23.224 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.224 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.224 BaseBdev2 00:08:23.224 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.224 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:23.224 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:23.224 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:23.224 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:23.224 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:23.224 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:23.224 01:28:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:23.224 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.224 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.224 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.224 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:23.224 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.224 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.224 [ 00:08:23.224 { 00:08:23.224 "name": "BaseBdev2", 00:08:23.224 "aliases": [ 00:08:23.224 "3445cbc7-ae34-499b-8ab2-258b6ace2690" 00:08:23.224 ], 00:08:23.224 "product_name": "Malloc disk", 00:08:23.224 "block_size": 512, 00:08:23.224 "num_blocks": 65536, 00:08:23.224 "uuid": "3445cbc7-ae34-499b-8ab2-258b6ace2690", 00:08:23.224 "assigned_rate_limits": { 00:08:23.224 "rw_ios_per_sec": 0, 00:08:23.224 "rw_mbytes_per_sec": 0, 00:08:23.224 "r_mbytes_per_sec": 0, 00:08:23.225 "w_mbytes_per_sec": 0 00:08:23.225 }, 00:08:23.225 "claimed": false, 00:08:23.225 "zoned": false, 00:08:23.225 "supported_io_types": { 00:08:23.225 "read": true, 00:08:23.225 "write": true, 00:08:23.225 "unmap": true, 00:08:23.225 "flush": true, 00:08:23.225 "reset": true, 00:08:23.225 "nvme_admin": false, 00:08:23.225 "nvme_io": false, 00:08:23.225 "nvme_io_md": false, 00:08:23.225 "write_zeroes": true, 00:08:23.225 "zcopy": true, 00:08:23.225 "get_zone_info": false, 00:08:23.225 "zone_management": false, 00:08:23.225 "zone_append": false, 00:08:23.225 "compare": false, 00:08:23.225 "compare_and_write": false, 00:08:23.225 "abort": true, 00:08:23.225 "seek_hole": false, 00:08:23.225 "seek_data": false, 00:08:23.225 "copy": true, 00:08:23.225 
"nvme_iov_md": false 00:08:23.225 }, 00:08:23.225 "memory_domains": [ 00:08:23.225 { 00:08:23.225 "dma_device_id": "system", 00:08:23.225 "dma_device_type": 1 00:08:23.225 }, 00:08:23.225 { 00:08:23.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.225 "dma_device_type": 2 00:08:23.225 } 00:08:23.225 ], 00:08:23.225 "driver_specific": {} 00:08:23.225 } 00:08:23.225 ] 00:08:23.225 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.225 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:23.225 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:23.225 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:23.225 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:23.225 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.225 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.225 BaseBdev3 00:08:23.225 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.225 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:23.225 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:23.225 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:23.225 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:23.225 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:23.225 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:23.225 
01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:23.225 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.225 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.225 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.225 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:23.225 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.225 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.485 [ 00:08:23.485 { 00:08:23.485 "name": "BaseBdev3", 00:08:23.485 "aliases": [ 00:08:23.485 "629bb432-8447-4369-8b08-3137e5c9ffba" 00:08:23.485 ], 00:08:23.485 "product_name": "Malloc disk", 00:08:23.485 "block_size": 512, 00:08:23.485 "num_blocks": 65536, 00:08:23.485 "uuid": "629bb432-8447-4369-8b08-3137e5c9ffba", 00:08:23.485 "assigned_rate_limits": { 00:08:23.485 "rw_ios_per_sec": 0, 00:08:23.485 "rw_mbytes_per_sec": 0, 00:08:23.485 "r_mbytes_per_sec": 0, 00:08:23.485 "w_mbytes_per_sec": 0 00:08:23.485 }, 00:08:23.485 "claimed": false, 00:08:23.485 "zoned": false, 00:08:23.486 "supported_io_types": { 00:08:23.486 "read": true, 00:08:23.486 "write": true, 00:08:23.486 "unmap": true, 00:08:23.486 "flush": true, 00:08:23.486 "reset": true, 00:08:23.486 "nvme_admin": false, 00:08:23.486 "nvme_io": false, 00:08:23.486 "nvme_io_md": false, 00:08:23.486 "write_zeroes": true, 00:08:23.486 "zcopy": true, 00:08:23.486 "get_zone_info": false, 00:08:23.486 "zone_management": false, 00:08:23.486 "zone_append": false, 00:08:23.486 "compare": false, 00:08:23.486 "compare_and_write": false, 00:08:23.486 "abort": true, 00:08:23.486 "seek_hole": false, 00:08:23.486 "seek_data": 
false, 00:08:23.486 "copy": true, 00:08:23.486 "nvme_iov_md": false 00:08:23.486 }, 00:08:23.486 "memory_domains": [ 00:08:23.486 { 00:08:23.486 "dma_device_id": "system", 00:08:23.486 "dma_device_type": 1 00:08:23.486 }, 00:08:23.486 { 00:08:23.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.486 "dma_device_type": 2 00:08:23.486 } 00:08:23.486 ], 00:08:23.486 "driver_specific": {} 00:08:23.486 } 00:08:23.486 ] 00:08:23.486 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.486 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:23.486 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:23.486 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:23.486 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:23.486 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.486 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.486 [2024-10-09 01:28:22.138279] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:23.486 [2024-10-09 01:28:22.138407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:23.486 [2024-10-09 01:28:22.138449] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:23.486 [2024-10-09 01:28:22.140618] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:23.486 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.486 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state 
Existed_Raid configuring raid0 64 3 00:08:23.486 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.486 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.486 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.486 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.486 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.486 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.486 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.486 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.486 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.486 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.486 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.486 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.486 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.486 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.486 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.486 "name": "Existed_Raid", 00:08:23.486 "uuid": "176d6388-e802-4567-8501-5c63c67c6836", 00:08:23.486 "strip_size_kb": 64, 00:08:23.486 "state": "configuring", 00:08:23.486 "raid_level": "raid0", 00:08:23.486 
"superblock": true, 00:08:23.486 "num_base_bdevs": 3, 00:08:23.486 "num_base_bdevs_discovered": 2, 00:08:23.486 "num_base_bdevs_operational": 3, 00:08:23.486 "base_bdevs_list": [ 00:08:23.486 { 00:08:23.486 "name": "BaseBdev1", 00:08:23.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.486 "is_configured": false, 00:08:23.486 "data_offset": 0, 00:08:23.486 "data_size": 0 00:08:23.486 }, 00:08:23.486 { 00:08:23.486 "name": "BaseBdev2", 00:08:23.486 "uuid": "3445cbc7-ae34-499b-8ab2-258b6ace2690", 00:08:23.486 "is_configured": true, 00:08:23.486 "data_offset": 2048, 00:08:23.486 "data_size": 63488 00:08:23.486 }, 00:08:23.486 { 00:08:23.486 "name": "BaseBdev3", 00:08:23.486 "uuid": "629bb432-8447-4369-8b08-3137e5c9ffba", 00:08:23.486 "is_configured": true, 00:08:23.486 "data_offset": 2048, 00:08:23.486 "data_size": 63488 00:08:23.486 } 00:08:23.486 ] 00:08:23.486 }' 00:08:23.486 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.486 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.746 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:23.746 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.746 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.746 [2024-10-09 01:28:22.546346] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:23.746 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.746 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:23.746 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.746 01:28:22 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.746 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.746 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.746 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.746 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.746 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.746 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.746 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.747 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.747 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.747 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.747 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.747 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.747 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.747 "name": "Existed_Raid", 00:08:23.747 "uuid": "176d6388-e802-4567-8501-5c63c67c6836", 00:08:23.747 "strip_size_kb": 64, 00:08:23.747 "state": "configuring", 00:08:23.747 "raid_level": "raid0", 00:08:23.747 "superblock": true, 00:08:23.747 "num_base_bdevs": 3, 00:08:23.747 "num_base_bdevs_discovered": 1, 00:08:23.747 "num_base_bdevs_operational": 3, 00:08:23.747 "base_bdevs_list": [ 00:08:23.747 { 00:08:23.747 "name": "BaseBdev1", 
00:08:23.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.747 "is_configured": false, 00:08:23.747 "data_offset": 0, 00:08:23.747 "data_size": 0 00:08:23.747 }, 00:08:23.747 { 00:08:23.747 "name": null, 00:08:23.747 "uuid": "3445cbc7-ae34-499b-8ab2-258b6ace2690", 00:08:23.747 "is_configured": false, 00:08:23.747 "data_offset": 0, 00:08:23.747 "data_size": 63488 00:08:23.747 }, 00:08:23.747 { 00:08:23.747 "name": "BaseBdev3", 00:08:23.747 "uuid": "629bb432-8447-4369-8b08-3137e5c9ffba", 00:08:23.747 "is_configured": true, 00:08:23.747 "data_offset": 2048, 00:08:23.747 "data_size": 63488 00:08:23.747 } 00:08:23.747 ] 00:08:23.747 }' 00:08:23.747 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.747 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.317 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.317 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.317 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.317 01:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:24.317 01:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.317 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:24.317 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:24.317 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.317 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.317 [2024-10-09 01:28:23.043981] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:24.317 BaseBdev1 00:08:24.317 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.317 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:24.317 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:24.317 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:24.317 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:24.317 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:24.317 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:24.317 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:24.317 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.317 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.317 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.317 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:24.317 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.317 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.317 [ 00:08:24.317 { 00:08:24.317 "name": "BaseBdev1", 00:08:24.317 "aliases": [ 00:08:24.317 "5c2d0031-1bb5-42d5-8eba-5574e8153259" 00:08:24.317 ], 00:08:24.317 "product_name": "Malloc disk", 00:08:24.317 "block_size": 512, 00:08:24.317 "num_blocks": 65536, 00:08:24.317 
"uuid": "5c2d0031-1bb5-42d5-8eba-5574e8153259", 00:08:24.317 "assigned_rate_limits": { 00:08:24.317 "rw_ios_per_sec": 0, 00:08:24.318 "rw_mbytes_per_sec": 0, 00:08:24.318 "r_mbytes_per_sec": 0, 00:08:24.318 "w_mbytes_per_sec": 0 00:08:24.318 }, 00:08:24.318 "claimed": true, 00:08:24.318 "claim_type": "exclusive_write", 00:08:24.318 "zoned": false, 00:08:24.318 "supported_io_types": { 00:08:24.318 "read": true, 00:08:24.318 "write": true, 00:08:24.318 "unmap": true, 00:08:24.318 "flush": true, 00:08:24.318 "reset": true, 00:08:24.318 "nvme_admin": false, 00:08:24.318 "nvme_io": false, 00:08:24.318 "nvme_io_md": false, 00:08:24.318 "write_zeroes": true, 00:08:24.318 "zcopy": true, 00:08:24.318 "get_zone_info": false, 00:08:24.318 "zone_management": false, 00:08:24.318 "zone_append": false, 00:08:24.318 "compare": false, 00:08:24.318 "compare_and_write": false, 00:08:24.318 "abort": true, 00:08:24.318 "seek_hole": false, 00:08:24.318 "seek_data": false, 00:08:24.318 "copy": true, 00:08:24.318 "nvme_iov_md": false 00:08:24.318 }, 00:08:24.318 "memory_domains": [ 00:08:24.318 { 00:08:24.318 "dma_device_id": "system", 00:08:24.318 "dma_device_type": 1 00:08:24.318 }, 00:08:24.318 { 00:08:24.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.318 "dma_device_type": 2 00:08:24.318 } 00:08:24.318 ], 00:08:24.318 "driver_specific": {} 00:08:24.318 } 00:08:24.318 ] 00:08:24.318 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.318 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:24.318 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:24.318 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.318 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 
00:08:24.318 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.318 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.318 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.318 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.318 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.318 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.318 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.318 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.318 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.318 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.318 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.318 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.318 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.318 "name": "Existed_Raid", 00:08:24.318 "uuid": "176d6388-e802-4567-8501-5c63c67c6836", 00:08:24.318 "strip_size_kb": 64, 00:08:24.318 "state": "configuring", 00:08:24.318 "raid_level": "raid0", 00:08:24.318 "superblock": true, 00:08:24.318 "num_base_bdevs": 3, 00:08:24.318 "num_base_bdevs_discovered": 2, 00:08:24.318 "num_base_bdevs_operational": 3, 00:08:24.318 "base_bdevs_list": [ 00:08:24.318 { 00:08:24.318 "name": "BaseBdev1", 00:08:24.318 "uuid": "5c2d0031-1bb5-42d5-8eba-5574e8153259", 
00:08:24.318 "is_configured": true, 00:08:24.318 "data_offset": 2048, 00:08:24.318 "data_size": 63488 00:08:24.318 }, 00:08:24.318 { 00:08:24.318 "name": null, 00:08:24.318 "uuid": "3445cbc7-ae34-499b-8ab2-258b6ace2690", 00:08:24.318 "is_configured": false, 00:08:24.318 "data_offset": 0, 00:08:24.318 "data_size": 63488 00:08:24.318 }, 00:08:24.318 { 00:08:24.318 "name": "BaseBdev3", 00:08:24.318 "uuid": "629bb432-8447-4369-8b08-3137e5c9ffba", 00:08:24.318 "is_configured": true, 00:08:24.318 "data_offset": 2048, 00:08:24.318 "data_size": 63488 00:08:24.318 } 00:08:24.318 ] 00:08:24.318 }' 00:08:24.318 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.318 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.889 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:24.889 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.889 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.889 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.889 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.889 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:24.889 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:24.889 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.889 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.889 [2024-10-09 01:28:23.564228] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:24.889 01:28:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.889 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:24.889 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.889 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.889 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.889 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.889 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.889 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.889 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.889 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.889 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.889 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.889 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.889 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.889 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.889 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.889 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.889 "name": 
"Existed_Raid", 00:08:24.889 "uuid": "176d6388-e802-4567-8501-5c63c67c6836", 00:08:24.889 "strip_size_kb": 64, 00:08:24.889 "state": "configuring", 00:08:24.889 "raid_level": "raid0", 00:08:24.889 "superblock": true, 00:08:24.889 "num_base_bdevs": 3, 00:08:24.889 "num_base_bdevs_discovered": 1, 00:08:24.889 "num_base_bdevs_operational": 3, 00:08:24.889 "base_bdevs_list": [ 00:08:24.889 { 00:08:24.889 "name": "BaseBdev1", 00:08:24.889 "uuid": "5c2d0031-1bb5-42d5-8eba-5574e8153259", 00:08:24.889 "is_configured": true, 00:08:24.889 "data_offset": 2048, 00:08:24.889 "data_size": 63488 00:08:24.889 }, 00:08:24.889 { 00:08:24.889 "name": null, 00:08:24.889 "uuid": "3445cbc7-ae34-499b-8ab2-258b6ace2690", 00:08:24.889 "is_configured": false, 00:08:24.889 "data_offset": 0, 00:08:24.889 "data_size": 63488 00:08:24.889 }, 00:08:24.889 { 00:08:24.889 "name": null, 00:08:24.889 "uuid": "629bb432-8447-4369-8b08-3137e5c9ffba", 00:08:24.889 "is_configured": false, 00:08:24.889 "data_offset": 0, 00:08:24.889 "data_size": 63488 00:08:24.889 } 00:08:24.889 ] 00:08:24.889 }' 00:08:24.889 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.889 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.149 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:25.149 01:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.149 01:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.149 01:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.149 01:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.409 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:25.409 
01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:25.409 01:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.409 01:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.409 [2024-10-09 01:28:24.052316] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:25.409 01:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.409 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.409 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.409 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.409 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.409 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.409 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.409 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.409 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.409 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.409 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.409 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.409 01:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:25.409 01:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.409 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.409 01:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.409 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.409 "name": "Existed_Raid", 00:08:25.409 "uuid": "176d6388-e802-4567-8501-5c63c67c6836", 00:08:25.409 "strip_size_kb": 64, 00:08:25.409 "state": "configuring", 00:08:25.409 "raid_level": "raid0", 00:08:25.409 "superblock": true, 00:08:25.409 "num_base_bdevs": 3, 00:08:25.409 "num_base_bdevs_discovered": 2, 00:08:25.409 "num_base_bdevs_operational": 3, 00:08:25.409 "base_bdevs_list": [ 00:08:25.409 { 00:08:25.409 "name": "BaseBdev1", 00:08:25.409 "uuid": "5c2d0031-1bb5-42d5-8eba-5574e8153259", 00:08:25.409 "is_configured": true, 00:08:25.409 "data_offset": 2048, 00:08:25.409 "data_size": 63488 00:08:25.409 }, 00:08:25.409 { 00:08:25.409 "name": null, 00:08:25.409 "uuid": "3445cbc7-ae34-499b-8ab2-258b6ace2690", 00:08:25.409 "is_configured": false, 00:08:25.409 "data_offset": 0, 00:08:25.409 "data_size": 63488 00:08:25.409 }, 00:08:25.409 { 00:08:25.409 "name": "BaseBdev3", 00:08:25.409 "uuid": "629bb432-8447-4369-8b08-3137e5c9ffba", 00:08:25.409 "is_configured": true, 00:08:25.409 "data_offset": 2048, 00:08:25.409 "data_size": 63488 00:08:25.409 } 00:08:25.409 ] 00:08:25.409 }' 00:08:25.409 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.409 01:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.669 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.669 01:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:08:25.669 01:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.669 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:25.670 01:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.670 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:25.670 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:25.670 01:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.670 01:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.670 [2024-10-09 01:28:24.548535] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:25.930 01:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.930 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.930 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.930 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.930 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.930 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.930 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.930 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.930 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:08:25.930 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.930 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.930 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.930 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.930 01:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.930 01:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.930 01:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.930 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.930 "name": "Existed_Raid", 00:08:25.930 "uuid": "176d6388-e802-4567-8501-5c63c67c6836", 00:08:25.930 "strip_size_kb": 64, 00:08:25.930 "state": "configuring", 00:08:25.930 "raid_level": "raid0", 00:08:25.930 "superblock": true, 00:08:25.930 "num_base_bdevs": 3, 00:08:25.930 "num_base_bdevs_discovered": 1, 00:08:25.930 "num_base_bdevs_operational": 3, 00:08:25.930 "base_bdevs_list": [ 00:08:25.930 { 00:08:25.930 "name": null, 00:08:25.930 "uuid": "5c2d0031-1bb5-42d5-8eba-5574e8153259", 00:08:25.930 "is_configured": false, 00:08:25.930 "data_offset": 0, 00:08:25.930 "data_size": 63488 00:08:25.930 }, 00:08:25.930 { 00:08:25.930 "name": null, 00:08:25.930 "uuid": "3445cbc7-ae34-499b-8ab2-258b6ace2690", 00:08:25.930 "is_configured": false, 00:08:25.930 "data_offset": 0, 00:08:25.930 "data_size": 63488 00:08:25.930 }, 00:08:25.930 { 00:08:25.930 "name": "BaseBdev3", 00:08:25.930 "uuid": "629bb432-8447-4369-8b08-3137e5c9ffba", 00:08:25.930 "is_configured": true, 00:08:25.930 "data_offset": 2048, 00:08:25.930 "data_size": 63488 00:08:25.930 } 
00:08:25.930 ] 00:08:25.930 }' 00:08:25.930 01:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.930 01:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.190 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:26.190 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.190 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.190 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.190 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.190 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:26.190 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:26.190 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.190 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.190 [2024-10-09 01:28:25.056642] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:26.190 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.190 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:26.190 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.190 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.190 01:28:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.190 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.190 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.190 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.190 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.190 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.190 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.190 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.190 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.190 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.190 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.449 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.449 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.449 "name": "Existed_Raid", 00:08:26.449 "uuid": "176d6388-e802-4567-8501-5c63c67c6836", 00:08:26.449 "strip_size_kb": 64, 00:08:26.449 "state": "configuring", 00:08:26.449 "raid_level": "raid0", 00:08:26.449 "superblock": true, 00:08:26.449 "num_base_bdevs": 3, 00:08:26.449 "num_base_bdevs_discovered": 2, 00:08:26.449 "num_base_bdevs_operational": 3, 00:08:26.449 "base_bdevs_list": [ 00:08:26.449 { 00:08:26.449 "name": null, 00:08:26.449 "uuid": "5c2d0031-1bb5-42d5-8eba-5574e8153259", 00:08:26.449 "is_configured": false, 00:08:26.449 "data_offset": 0, 
00:08:26.449 "data_size": 63488 00:08:26.449 }, 00:08:26.449 { 00:08:26.449 "name": "BaseBdev2", 00:08:26.449 "uuid": "3445cbc7-ae34-499b-8ab2-258b6ace2690", 00:08:26.449 "is_configured": true, 00:08:26.449 "data_offset": 2048, 00:08:26.449 "data_size": 63488 00:08:26.449 }, 00:08:26.449 { 00:08:26.449 "name": "BaseBdev3", 00:08:26.449 "uuid": "629bb432-8447-4369-8b08-3137e5c9ffba", 00:08:26.449 "is_configured": true, 00:08:26.449 "data_offset": 2048, 00:08:26.449 "data_size": 63488 00:08:26.449 } 00:08:26.449 ] 00:08:26.450 }' 00:08:26.450 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.450 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.710 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.710 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.710 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.710 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:26.710 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.710 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:26.710 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.710 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.710 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.710 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:26.710 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:26.710 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5c2d0031-1bb5-42d5-8eba-5574e8153259 00:08:26.710 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.710 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.710 [2024-10-09 01:28:25.577509] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:26.710 [2024-10-09 01:28:25.577727] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:26.710 [2024-10-09 01:28:25.577741] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:26.710 [2024-10-09 01:28:25.578016] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:08:26.710 NewBaseBdev 00:08:26.710 [2024-10-09 01:28:25.578145] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:26.710 [2024-10-09 01:28:25.578162] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:26.710 [2024-10-09 01:28:25.578263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.710 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.710 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:26.710 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:26.710 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:26.711 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:26.711 01:28:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:26.711 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:26.711 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:26.711 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.711 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.711 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.711 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:26.711 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.711 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.970 [ 00:08:26.970 { 00:08:26.970 "name": "NewBaseBdev", 00:08:26.970 "aliases": [ 00:08:26.970 "5c2d0031-1bb5-42d5-8eba-5574e8153259" 00:08:26.970 ], 00:08:26.970 "product_name": "Malloc disk", 00:08:26.970 "block_size": 512, 00:08:26.970 "num_blocks": 65536, 00:08:26.970 "uuid": "5c2d0031-1bb5-42d5-8eba-5574e8153259", 00:08:26.970 "assigned_rate_limits": { 00:08:26.970 "rw_ios_per_sec": 0, 00:08:26.970 "rw_mbytes_per_sec": 0, 00:08:26.970 "r_mbytes_per_sec": 0, 00:08:26.970 "w_mbytes_per_sec": 0 00:08:26.970 }, 00:08:26.970 "claimed": true, 00:08:26.970 "claim_type": "exclusive_write", 00:08:26.970 "zoned": false, 00:08:26.970 "supported_io_types": { 00:08:26.970 "read": true, 00:08:26.970 "write": true, 00:08:26.970 "unmap": true, 00:08:26.970 "flush": true, 00:08:26.970 "reset": true, 00:08:26.970 "nvme_admin": false, 00:08:26.970 "nvme_io": false, 00:08:26.970 "nvme_io_md": false, 00:08:26.970 "write_zeroes": true, 00:08:26.970 "zcopy": true, 00:08:26.970 "get_zone_info": false, 
00:08:26.970 "zone_management": false, 00:08:26.970 "zone_append": false, 00:08:26.970 "compare": false, 00:08:26.970 "compare_and_write": false, 00:08:26.970 "abort": true, 00:08:26.970 "seek_hole": false, 00:08:26.970 "seek_data": false, 00:08:26.970 "copy": true, 00:08:26.970 "nvme_iov_md": false 00:08:26.970 }, 00:08:26.970 "memory_domains": [ 00:08:26.970 { 00:08:26.970 "dma_device_id": "system", 00:08:26.970 "dma_device_type": 1 00:08:26.970 }, 00:08:26.970 { 00:08:26.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.970 "dma_device_type": 2 00:08:26.970 } 00:08:26.970 ], 00:08:26.970 "driver_specific": {} 00:08:26.970 } 00:08:26.970 ] 00:08:26.970 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.970 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:26.970 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:26.970 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.970 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.970 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.970 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.970 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.970 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.970 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.970 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.970 01:28:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.970 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.970 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.971 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.971 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.971 01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.971 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.971 "name": "Existed_Raid", 00:08:26.971 "uuid": "176d6388-e802-4567-8501-5c63c67c6836", 00:08:26.971 "strip_size_kb": 64, 00:08:26.971 "state": "online", 00:08:26.971 "raid_level": "raid0", 00:08:26.971 "superblock": true, 00:08:26.971 "num_base_bdevs": 3, 00:08:26.971 "num_base_bdevs_discovered": 3, 00:08:26.971 "num_base_bdevs_operational": 3, 00:08:26.971 "base_bdevs_list": [ 00:08:26.971 { 00:08:26.971 "name": "NewBaseBdev", 00:08:26.971 "uuid": "5c2d0031-1bb5-42d5-8eba-5574e8153259", 00:08:26.971 "is_configured": true, 00:08:26.971 "data_offset": 2048, 00:08:26.971 "data_size": 63488 00:08:26.971 }, 00:08:26.971 { 00:08:26.971 "name": "BaseBdev2", 00:08:26.971 "uuid": "3445cbc7-ae34-499b-8ab2-258b6ace2690", 00:08:26.971 "is_configured": true, 00:08:26.971 "data_offset": 2048, 00:08:26.971 "data_size": 63488 00:08:26.971 }, 00:08:26.971 { 00:08:26.971 "name": "BaseBdev3", 00:08:26.971 "uuid": "629bb432-8447-4369-8b08-3137e5c9ffba", 00:08:26.971 "is_configured": true, 00:08:26.971 "data_offset": 2048, 00:08:26.971 "data_size": 63488 00:08:26.971 } 00:08:26.971 ] 00:08:26.971 }' 00:08:26.971 01:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.971 
01:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.230 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:27.230 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:27.230 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:27.230 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:27.230 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:27.230 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:27.230 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:27.230 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:27.230 01:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.230 01:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.230 [2024-10-09 01:28:26.110079] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.491 01:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.491 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:27.491 "name": "Existed_Raid", 00:08:27.491 "aliases": [ 00:08:27.491 "176d6388-e802-4567-8501-5c63c67c6836" 00:08:27.491 ], 00:08:27.491 "product_name": "Raid Volume", 00:08:27.491 "block_size": 512, 00:08:27.491 "num_blocks": 190464, 00:08:27.491 "uuid": "176d6388-e802-4567-8501-5c63c67c6836", 00:08:27.491 "assigned_rate_limits": { 00:08:27.491 "rw_ios_per_sec": 0, 00:08:27.491 "rw_mbytes_per_sec": 0, 
00:08:27.491 "r_mbytes_per_sec": 0, 00:08:27.491 "w_mbytes_per_sec": 0 00:08:27.491 }, 00:08:27.491 "claimed": false, 00:08:27.491 "zoned": false, 00:08:27.491 "supported_io_types": { 00:08:27.491 "read": true, 00:08:27.491 "write": true, 00:08:27.491 "unmap": true, 00:08:27.491 "flush": true, 00:08:27.491 "reset": true, 00:08:27.491 "nvme_admin": false, 00:08:27.491 "nvme_io": false, 00:08:27.491 "nvme_io_md": false, 00:08:27.491 "write_zeroes": true, 00:08:27.491 "zcopy": false, 00:08:27.491 "get_zone_info": false, 00:08:27.491 "zone_management": false, 00:08:27.491 "zone_append": false, 00:08:27.491 "compare": false, 00:08:27.491 "compare_and_write": false, 00:08:27.491 "abort": false, 00:08:27.491 "seek_hole": false, 00:08:27.491 "seek_data": false, 00:08:27.491 "copy": false, 00:08:27.491 "nvme_iov_md": false 00:08:27.491 }, 00:08:27.491 "memory_domains": [ 00:08:27.491 { 00:08:27.491 "dma_device_id": "system", 00:08:27.491 "dma_device_type": 1 00:08:27.491 }, 00:08:27.491 { 00:08:27.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.491 "dma_device_type": 2 00:08:27.491 }, 00:08:27.491 { 00:08:27.491 "dma_device_id": "system", 00:08:27.491 "dma_device_type": 1 00:08:27.491 }, 00:08:27.491 { 00:08:27.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.491 "dma_device_type": 2 00:08:27.491 }, 00:08:27.491 { 00:08:27.491 "dma_device_id": "system", 00:08:27.491 "dma_device_type": 1 00:08:27.491 }, 00:08:27.491 { 00:08:27.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.491 "dma_device_type": 2 00:08:27.491 } 00:08:27.491 ], 00:08:27.491 "driver_specific": { 00:08:27.491 "raid": { 00:08:27.491 "uuid": "176d6388-e802-4567-8501-5c63c67c6836", 00:08:27.491 "strip_size_kb": 64, 00:08:27.491 "state": "online", 00:08:27.491 "raid_level": "raid0", 00:08:27.491 "superblock": true, 00:08:27.491 "num_base_bdevs": 3, 00:08:27.491 "num_base_bdevs_discovered": 3, 00:08:27.491 "num_base_bdevs_operational": 3, 00:08:27.491 "base_bdevs_list": [ 00:08:27.491 { 
00:08:27.491 "name": "NewBaseBdev", 00:08:27.491 "uuid": "5c2d0031-1bb5-42d5-8eba-5574e8153259", 00:08:27.491 "is_configured": true, 00:08:27.491 "data_offset": 2048, 00:08:27.491 "data_size": 63488 00:08:27.491 }, 00:08:27.491 { 00:08:27.491 "name": "BaseBdev2", 00:08:27.491 "uuid": "3445cbc7-ae34-499b-8ab2-258b6ace2690", 00:08:27.491 "is_configured": true, 00:08:27.491 "data_offset": 2048, 00:08:27.491 "data_size": 63488 00:08:27.491 }, 00:08:27.491 { 00:08:27.491 "name": "BaseBdev3", 00:08:27.491 "uuid": "629bb432-8447-4369-8b08-3137e5c9ffba", 00:08:27.491 "is_configured": true, 00:08:27.491 "data_offset": 2048, 00:08:27.491 "data_size": 63488 00:08:27.491 } 00:08:27.491 ] 00:08:27.491 } 00:08:27.491 } 00:08:27.491 }' 00:08:27.491 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:27.491 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:27.491 BaseBdev2 00:08:27.491 BaseBdev3' 00:08:27.491 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.491 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:27.491 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.491 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:27.491 01:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.491 01:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.491 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.491 01:28:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.492 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.492 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.492 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.492 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:27.492 01:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.492 01:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.492 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.492 01:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.492 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.492 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.492 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.492 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.492 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:27.492 01:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.492 01:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.492 01:28:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.492 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.492 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.492 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:27.492 01:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.492 01:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.492 [2024-10-09 01:28:26.341756] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:27.492 [2024-10-09 01:28:26.341863] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:27.492 [2024-10-09 01:28:26.341944] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.492 [2024-10-09 01:28:26.342008] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:27.492 [2024-10-09 01:28:26.342019] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:27.492 01:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.492 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 76743 00:08:27.492 01:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 76743 ']' 00:08:27.492 01:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 76743 00:08:27.492 01:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:27.492 01:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:27.492 01:28:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76743 00:08:27.751 killing process with pid 76743 00:08:27.751 01:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:27.751 01:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:27.751 01:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76743' 00:08:27.751 01:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 76743 00:08:27.751 [2024-10-09 01:28:26.389645] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:27.751 01:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 76743 00:08:27.751 [2024-10-09 01:28:26.448658] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:28.012 ************************************ 00:08:28.012 END TEST raid_state_function_test_sb 00:08:28.012 ************************************ 00:08:28.012 01:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:28.012 00:08:28.012 real 0m9.009s 00:08:28.012 user 0m15.098s 00:08:28.012 sys 0m1.848s 00:08:28.012 01:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.012 01:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.012 01:28:26 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:28.012 01:28:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:28.012 01:28:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.012 01:28:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:28.012 ************************************ 00:08:28.012 START TEST raid_superblock_test 00:08:28.012 
************************************ 00:08:28.012 01:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:08:28.012 01:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:28.012 01:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:28.012 01:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:28.012 01:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:28.012 01:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:28.012 01:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:28.012 01:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:28.012 01:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:28.012 01:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:28.012 01:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:28.012 01:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:28.012 01:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:28.012 01:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:28.012 01:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:28.012 01:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:28.012 01:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:28.012 01:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=77352 00:08:28.012 01:28:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@413 -- # waitforlisten 77352 00:08:28.012 01:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:28.012 01:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 77352 ']' 00:08:28.012 01:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.012 01:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:28.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.012 01:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.012 01:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:28.012 01:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.291 [2024-10-09 01:28:26.974573] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:08:28.291 [2024-10-09 01:28:26.974713] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77352 ] 00:08:28.291 [2024-10-09 01:28:27.110183] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:28.291 [2024-10-09 01:28:27.139226] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.555 [2024-10-09 01:28:27.215946] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.555 [2024-10-09 01:28:27.292223] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:28.555 [2024-10-09 01:28:27.292262] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.124 01:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.124 01:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:29.124 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:29.124 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:29.124 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:29.124 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:29.124 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:29.124 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:29.124 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:29.124 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.125 malloc1 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.125 [2024-10-09 01:28:27.845012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:29.125 [2024-10-09 01:28:27.845179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.125 [2024-10-09 01:28:27.845233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:29.125 [2024-10-09 01:28:27.845273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.125 [2024-10-09 01:28:27.847847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.125 [2024-10-09 01:28:27.847932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:29.125 pt1 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.125 malloc2 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.125 [2024-10-09 01:28:27.897243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:29.125 [2024-10-09 01:28:27.897378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.125 [2024-10-09 01:28:27.897420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:29.125 [2024-10-09 01:28:27.897448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.125 [2024-10-09 01:28:27.900029] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.125 [2024-10-09 01:28:27.900105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:29.125 pt2 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.125 malloc3 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.125 [2024-10-09 01:28:27.937095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:29.125 [2024-10-09 01:28:27.937225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.125 [2024-10-09 01:28:27.937285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:29.125 [2024-10-09 01:28:27.937316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:08:29.125 [2024-10-09 01:28:27.939832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.125 [2024-10-09 01:28:27.939906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:29.125 pt3 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.125 [2024-10-09 01:28:27.949168] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:29.125 [2024-10-09 01:28:27.951337] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:29.125 [2024-10-09 01:28:27.951448] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:29.125 [2024-10-09 01:28:27.951643] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:29.125 [2024-10-09 01:28:27.951690] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:29.125 [2024-10-09 01:28:27.952002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:29.125 [2024-10-09 01:28:27.952204] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:29.125 [2024-10-09 01:28:27.952251] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:29.125 [2024-10-09 
01:28:27.952461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.125 01:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.125 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.125 "name": "raid_bdev1", 00:08:29.125 "uuid": 
"0e945d63-1d5e-4758-9ca4-e902cc75e532", 00:08:29.125 "strip_size_kb": 64, 00:08:29.125 "state": "online", 00:08:29.125 "raid_level": "raid0", 00:08:29.125 "superblock": true, 00:08:29.125 "num_base_bdevs": 3, 00:08:29.125 "num_base_bdevs_discovered": 3, 00:08:29.125 "num_base_bdevs_operational": 3, 00:08:29.125 "base_bdevs_list": [ 00:08:29.125 { 00:08:29.125 "name": "pt1", 00:08:29.125 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:29.125 "is_configured": true, 00:08:29.125 "data_offset": 2048, 00:08:29.125 "data_size": 63488 00:08:29.125 }, 00:08:29.125 { 00:08:29.125 "name": "pt2", 00:08:29.125 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:29.125 "is_configured": true, 00:08:29.125 "data_offset": 2048, 00:08:29.125 "data_size": 63488 00:08:29.125 }, 00:08:29.125 { 00:08:29.125 "name": "pt3", 00:08:29.125 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:29.125 "is_configured": true, 00:08:29.125 "data_offset": 2048, 00:08:29.125 "data_size": 63488 00:08:29.125 } 00:08:29.125 ] 00:08:29.125 }' 00:08:29.125 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.125 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.695 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:29.695 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:29.695 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:29.695 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:29.695 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:29.695 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:29.695 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:29.695 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:29.695 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.695 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.695 [2024-10-09 01:28:28.433508] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:29.695 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.695 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:29.695 "name": "raid_bdev1", 00:08:29.695 "aliases": [ 00:08:29.695 "0e945d63-1d5e-4758-9ca4-e902cc75e532" 00:08:29.695 ], 00:08:29.695 "product_name": "Raid Volume", 00:08:29.695 "block_size": 512, 00:08:29.695 "num_blocks": 190464, 00:08:29.695 "uuid": "0e945d63-1d5e-4758-9ca4-e902cc75e532", 00:08:29.695 "assigned_rate_limits": { 00:08:29.695 "rw_ios_per_sec": 0, 00:08:29.695 "rw_mbytes_per_sec": 0, 00:08:29.695 "r_mbytes_per_sec": 0, 00:08:29.695 "w_mbytes_per_sec": 0 00:08:29.695 }, 00:08:29.695 "claimed": false, 00:08:29.695 "zoned": false, 00:08:29.695 "supported_io_types": { 00:08:29.695 "read": true, 00:08:29.695 "write": true, 00:08:29.695 "unmap": true, 00:08:29.695 "flush": true, 00:08:29.695 "reset": true, 00:08:29.695 "nvme_admin": false, 00:08:29.695 "nvme_io": false, 00:08:29.695 "nvme_io_md": false, 00:08:29.695 "write_zeroes": true, 00:08:29.695 "zcopy": false, 00:08:29.695 "get_zone_info": false, 00:08:29.695 "zone_management": false, 00:08:29.695 "zone_append": false, 00:08:29.695 "compare": false, 00:08:29.695 "compare_and_write": false, 00:08:29.695 "abort": false, 00:08:29.695 "seek_hole": false, 00:08:29.695 "seek_data": false, 00:08:29.695 "copy": false, 00:08:29.695 "nvme_iov_md": false 00:08:29.695 }, 00:08:29.695 "memory_domains": [ 00:08:29.695 { 00:08:29.695 "dma_device_id": "system", 00:08:29.696 
"dma_device_type": 1 00:08:29.696 }, 00:08:29.696 { 00:08:29.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.696 "dma_device_type": 2 00:08:29.696 }, 00:08:29.696 { 00:08:29.696 "dma_device_id": "system", 00:08:29.696 "dma_device_type": 1 00:08:29.696 }, 00:08:29.696 { 00:08:29.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.696 "dma_device_type": 2 00:08:29.696 }, 00:08:29.696 { 00:08:29.696 "dma_device_id": "system", 00:08:29.696 "dma_device_type": 1 00:08:29.696 }, 00:08:29.696 { 00:08:29.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.696 "dma_device_type": 2 00:08:29.696 } 00:08:29.696 ], 00:08:29.696 "driver_specific": { 00:08:29.696 "raid": { 00:08:29.696 "uuid": "0e945d63-1d5e-4758-9ca4-e902cc75e532", 00:08:29.696 "strip_size_kb": 64, 00:08:29.696 "state": "online", 00:08:29.696 "raid_level": "raid0", 00:08:29.696 "superblock": true, 00:08:29.696 "num_base_bdevs": 3, 00:08:29.696 "num_base_bdevs_discovered": 3, 00:08:29.696 "num_base_bdevs_operational": 3, 00:08:29.696 "base_bdevs_list": [ 00:08:29.696 { 00:08:29.696 "name": "pt1", 00:08:29.696 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:29.696 "is_configured": true, 00:08:29.696 "data_offset": 2048, 00:08:29.696 "data_size": 63488 00:08:29.696 }, 00:08:29.696 { 00:08:29.696 "name": "pt2", 00:08:29.696 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:29.696 "is_configured": true, 00:08:29.696 "data_offset": 2048, 00:08:29.696 "data_size": 63488 00:08:29.696 }, 00:08:29.696 { 00:08:29.696 "name": "pt3", 00:08:29.696 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:29.696 "is_configured": true, 00:08:29.696 "data_offset": 2048, 00:08:29.696 "data_size": 63488 00:08:29.696 } 00:08:29.696 ] 00:08:29.696 } 00:08:29.696 } 00:08:29.696 }' 00:08:29.696 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:29.696 01:28:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:29.696 pt2 00:08:29.696 pt3' 00:08:29.696 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.696 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:29.696 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.696 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:29.696 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.696 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.696 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.696 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.696 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.696 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.696 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.696 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:29.696 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.696 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.696 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.956 [2024-10-09 01:28:28.697491] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0e945d63-1d5e-4758-9ca4-e902cc75e532 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0e945d63-1d5e-4758-9ca4-e902cc75e532 ']' 00:08:29.956 01:28:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.956 [2024-10-09 01:28:28.741263] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:29.956 [2024-10-09 01:28:28.741338] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:29.956 [2024-10-09 01:28:28.741430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:29.956 [2024-10-09 01:28:28.741512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:29.956 [2024-10-09 01:28:28.741590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.956 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.216 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.216 01:28:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:30.216 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:30.216 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:30.216 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:30.216 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:30.216 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.216 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:30.216 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.216 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:30.216 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.217 [2024-10-09 01:28:28.873342] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:30.217 [2024-10-09 01:28:28.875421] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:30.217 [2024-10-09 01:28:28.875469] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:30.217 [2024-10-09 01:28:28.875516] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:30.217 [2024-10-09 01:28:28.875572] 
bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:30.217 [2024-10-09 01:28:28.875588] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:30.217 [2024-10-09 01:28:28.875602] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:30.217 [2024-10-09 01:28:28.875618] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:30.217 request: 00:08:30.217 { 00:08:30.217 "name": "raid_bdev1", 00:08:30.217 "raid_level": "raid0", 00:08:30.217 "base_bdevs": [ 00:08:30.217 "malloc1", 00:08:30.217 "malloc2", 00:08:30.217 "malloc3" 00:08:30.217 ], 00:08:30.217 "strip_size_kb": 64, 00:08:30.217 "superblock": false, 00:08:30.217 "method": "bdev_raid_create", 00:08:30.217 "req_id": 1 00:08:30.217 } 00:08:30.217 Got JSON-RPC error response 00:08:30.217 response: 00:08:30.217 { 00:08:30.217 "code": -17, 00:08:30.217 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:30.217 } 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.217 01:28:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.217 [2024-10-09 01:28:28.929323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:30.217 [2024-10-09 01:28:28.929409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.217 [2024-10-09 01:28:28.929444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:30.217 [2024-10-09 01:28:28.929472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.217 [2024-10-09 01:28:28.931830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.217 [2024-10-09 01:28:28.931892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:30.217 [2024-10-09 01:28:28.931971] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:30.217 [2024-10-09 01:28:28.932031] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:30.217 pt1 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:30.217 01:28:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:30.217 01:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.217 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.217 "name": "raid_bdev1", 00:08:30.217 "uuid": "0e945d63-1d5e-4758-9ca4-e902cc75e532", 00:08:30.217 "strip_size_kb": 64, 00:08:30.217 "state": "configuring", 00:08:30.217 "raid_level": "raid0", 00:08:30.217 "superblock": true, 00:08:30.217 "num_base_bdevs": 3, 00:08:30.217 "num_base_bdevs_discovered": 1, 00:08:30.217 "num_base_bdevs_operational": 3, 00:08:30.217 "base_bdevs_list": [ 
00:08:30.217 { 00:08:30.217 "name": "pt1", 00:08:30.217 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:30.217 "is_configured": true, 00:08:30.217 "data_offset": 2048, 00:08:30.217 "data_size": 63488 00:08:30.217 }, 00:08:30.217 { 00:08:30.217 "name": null, 00:08:30.217 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:30.217 "is_configured": false, 00:08:30.217 "data_offset": 2048, 00:08:30.217 "data_size": 63488 00:08:30.217 }, 00:08:30.217 { 00:08:30.217 "name": null, 00:08:30.217 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:30.217 "is_configured": false, 00:08:30.217 "data_offset": 2048, 00:08:30.217 "data_size": 63488 00:08:30.217 } 00:08:30.217 ] 00:08:30.217 }' 00:08:30.217 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.217 01:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.477 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:30.477 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:30.477 01:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.477 01:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.737 [2024-10-09 01:28:29.369442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:30.737 [2024-10-09 01:28:29.369545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.737 [2024-10-09 01:28:29.369572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:08:30.737 [2024-10-09 01:28:29.369581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.737 [2024-10-09 01:28:29.369959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.737 [2024-10-09 
01:28:29.369975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:30.737 [2024-10-09 01:28:29.370034] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:30.737 [2024-10-09 01:28:29.370052] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:30.737 pt2 00:08:30.737 01:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.737 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:30.737 01:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.737 01:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.737 [2024-10-09 01:28:29.377475] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:30.737 01:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.737 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:30.737 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:30.737 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.737 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.737 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.737 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.737 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.737 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.737 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:30.737 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.737 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:30.737 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.737 01:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.737 01:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.737 01:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.737 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.737 "name": "raid_bdev1", 00:08:30.737 "uuid": "0e945d63-1d5e-4758-9ca4-e902cc75e532", 00:08:30.737 "strip_size_kb": 64, 00:08:30.737 "state": "configuring", 00:08:30.737 "raid_level": "raid0", 00:08:30.737 "superblock": true, 00:08:30.737 "num_base_bdevs": 3, 00:08:30.737 "num_base_bdevs_discovered": 1, 00:08:30.737 "num_base_bdevs_operational": 3, 00:08:30.737 "base_bdevs_list": [ 00:08:30.737 { 00:08:30.737 "name": "pt1", 00:08:30.737 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:30.737 "is_configured": true, 00:08:30.737 "data_offset": 2048, 00:08:30.737 "data_size": 63488 00:08:30.737 }, 00:08:30.737 { 00:08:30.737 "name": null, 00:08:30.737 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:30.737 "is_configured": false, 00:08:30.737 "data_offset": 0, 00:08:30.737 "data_size": 63488 00:08:30.737 }, 00:08:30.737 { 00:08:30.737 "name": null, 00:08:30.737 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:30.737 "is_configured": false, 00:08:30.737 "data_offset": 2048, 00:08:30.737 "data_size": 63488 00:08:30.737 } 00:08:30.737 ] 00:08:30.737 }' 00:08:30.737 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.737 01:28:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.998 [2024-10-09 01:28:29.781540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:30.998 [2024-10-09 01:28:29.781634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.998 [2024-10-09 01:28:29.781664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:30.998 [2024-10-09 01:28:29.781691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.998 [2024-10-09 01:28:29.782066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.998 [2024-10-09 01:28:29.782121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:30.998 [2024-10-09 01:28:29.782198] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:30.998 [2024-10-09 01:28:29.782261] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:30.998 pt2 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.998 [2024-10-09 01:28:29.793557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:30.998 [2024-10-09 01:28:29.793650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.998 [2024-10-09 01:28:29.793677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:30.998 [2024-10-09 01:28:29.793703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.998 [2024-10-09 01:28:29.794042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.998 [2024-10-09 01:28:29.794096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:30.998 [2024-10-09 01:28:29.794167] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:30.998 [2024-10-09 01:28:29.794211] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:30.998 [2024-10-09 01:28:29.794320] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:30.998 [2024-10-09 01:28:29.794337] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:30.998 [2024-10-09 01:28:29.794584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:30.998 [2024-10-09 01:28:29.794691] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:30.998 [2024-10-09 01:28:29.794699] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:30.998 [2024-10-09 01:28:29.794791] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.998 pt3 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.998 "name": "raid_bdev1", 00:08:30.998 "uuid": "0e945d63-1d5e-4758-9ca4-e902cc75e532", 00:08:30.998 "strip_size_kb": 64, 00:08:30.998 "state": "online", 00:08:30.998 "raid_level": "raid0", 00:08:30.998 "superblock": true, 00:08:30.998 "num_base_bdevs": 3, 00:08:30.998 "num_base_bdevs_discovered": 3, 00:08:30.998 "num_base_bdevs_operational": 3, 00:08:30.998 "base_bdevs_list": [ 00:08:30.998 { 00:08:30.998 "name": "pt1", 00:08:30.998 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:30.998 "is_configured": true, 00:08:30.998 "data_offset": 2048, 00:08:30.998 "data_size": 63488 00:08:30.998 }, 00:08:30.998 { 00:08:30.998 "name": "pt2", 00:08:30.998 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:30.998 "is_configured": true, 00:08:30.998 "data_offset": 2048, 00:08:30.998 "data_size": 63488 00:08:30.998 }, 00:08:30.998 { 00:08:30.998 "name": "pt3", 00:08:30.998 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:30.998 "is_configured": true, 00:08:30.998 "data_offset": 2048, 00:08:30.998 "data_size": 63488 00:08:30.998 } 00:08:30.998 ] 00:08:30.998 }' 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.998 01:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:31.568 01:28:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.568 [2024-10-09 01:28:30.230054] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:31.568 "name": "raid_bdev1", 00:08:31.568 "aliases": [ 00:08:31.568 "0e945d63-1d5e-4758-9ca4-e902cc75e532" 00:08:31.568 ], 00:08:31.568 "product_name": "Raid Volume", 00:08:31.568 "block_size": 512, 00:08:31.568 "num_blocks": 190464, 00:08:31.568 "uuid": "0e945d63-1d5e-4758-9ca4-e902cc75e532", 00:08:31.568 "assigned_rate_limits": { 00:08:31.568 "rw_ios_per_sec": 0, 00:08:31.568 "rw_mbytes_per_sec": 0, 00:08:31.568 "r_mbytes_per_sec": 0, 00:08:31.568 "w_mbytes_per_sec": 0 00:08:31.568 }, 00:08:31.568 "claimed": false, 00:08:31.568 "zoned": false, 00:08:31.568 "supported_io_types": { 00:08:31.568 "read": true, 00:08:31.568 "write": true, 00:08:31.568 "unmap": true, 00:08:31.568 "flush": true, 00:08:31.568 "reset": true, 00:08:31.568 "nvme_admin": false, 00:08:31.568 "nvme_io": false, 00:08:31.568 "nvme_io_md": false, 00:08:31.568 "write_zeroes": true, 00:08:31.568 "zcopy": false, 00:08:31.568 "get_zone_info": false, 00:08:31.568 "zone_management": false, 00:08:31.568 "zone_append": false, 00:08:31.568 "compare": false, 00:08:31.568 "compare_and_write": false, 00:08:31.568 "abort": false, 00:08:31.568 "seek_hole": false, 00:08:31.568 
"seek_data": false, 00:08:31.568 "copy": false, 00:08:31.568 "nvme_iov_md": false 00:08:31.568 }, 00:08:31.568 "memory_domains": [ 00:08:31.568 { 00:08:31.568 "dma_device_id": "system", 00:08:31.568 "dma_device_type": 1 00:08:31.568 }, 00:08:31.568 { 00:08:31.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.568 "dma_device_type": 2 00:08:31.568 }, 00:08:31.568 { 00:08:31.568 "dma_device_id": "system", 00:08:31.568 "dma_device_type": 1 00:08:31.568 }, 00:08:31.568 { 00:08:31.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.568 "dma_device_type": 2 00:08:31.568 }, 00:08:31.568 { 00:08:31.568 "dma_device_id": "system", 00:08:31.568 "dma_device_type": 1 00:08:31.568 }, 00:08:31.568 { 00:08:31.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.568 "dma_device_type": 2 00:08:31.568 } 00:08:31.568 ], 00:08:31.568 "driver_specific": { 00:08:31.568 "raid": { 00:08:31.568 "uuid": "0e945d63-1d5e-4758-9ca4-e902cc75e532", 00:08:31.568 "strip_size_kb": 64, 00:08:31.568 "state": "online", 00:08:31.568 "raid_level": "raid0", 00:08:31.568 "superblock": true, 00:08:31.568 "num_base_bdevs": 3, 00:08:31.568 "num_base_bdevs_discovered": 3, 00:08:31.568 "num_base_bdevs_operational": 3, 00:08:31.568 "base_bdevs_list": [ 00:08:31.568 { 00:08:31.568 "name": "pt1", 00:08:31.568 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:31.568 "is_configured": true, 00:08:31.568 "data_offset": 2048, 00:08:31.568 "data_size": 63488 00:08:31.568 }, 00:08:31.568 { 00:08:31.568 "name": "pt2", 00:08:31.568 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:31.568 "is_configured": true, 00:08:31.568 "data_offset": 2048, 00:08:31.568 "data_size": 63488 00:08:31.568 }, 00:08:31.568 { 00:08:31.568 "name": "pt3", 00:08:31.568 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:31.568 "is_configured": true, 00:08:31.568 "data_offset": 2048, 00:08:31.568 "data_size": 63488 00:08:31.568 } 00:08:31.568 ] 00:08:31.568 } 00:08:31.568 } 00:08:31.568 }' 00:08:31.568 01:28:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:31.568 pt2 00:08:31.568 pt3' 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:31.568 01:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.828 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.828 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.828 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:31.828 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:31.828 01:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.828 01:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.828 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:31.828 01:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.828 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:31.828 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:31.829 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:31.829 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:31.829 01:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.829 01:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.829 [2024-10-09 01:28:30.538034] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:31.829 01:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.829 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
0e945d63-1d5e-4758-9ca4-e902cc75e532 '!=' 0e945d63-1d5e-4758-9ca4-e902cc75e532 ']' 00:08:31.829 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:31.829 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:31.829 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:31.829 01:28:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 77352 00:08:31.829 01:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 77352 ']' 00:08:31.829 01:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 77352 00:08:31.829 01:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:31.829 01:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:31.829 01:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77352 00:08:31.829 01:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:31.829 01:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:31.829 01:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77352' 00:08:31.829 killing process with pid 77352 00:08:31.829 01:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 77352 00:08:31.829 [2024-10-09 01:28:30.607770] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:31.829 [2024-10-09 01:28:30.607888] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:31.829 01:28:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 77352 00:08:31.829 [2024-10-09 01:28:30.607951] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:08:31.829 [2024-10-09 01:28:30.607965] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:31.829 [2024-10-09 01:28:30.667480] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:32.398 01:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:32.398 00:08:32.398 real 0m4.144s 00:08:32.398 user 0m6.305s 00:08:32.398 sys 0m0.958s 00:08:32.398 01:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.398 ************************************ 00:08:32.398 END TEST raid_superblock_test 00:08:32.398 ************************************ 00:08:32.398 01:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.398 01:28:31 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:32.398 01:28:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:32.398 01:28:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.398 01:28:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:32.398 ************************************ 00:08:32.398 START TEST raid_read_error_test 00:08:32.398 ************************************ 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:32.398 01:28:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.U5RikoqLPw 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=77593 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 77593 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 77593 ']' 00:08:32.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:32.398 01:28:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.398 [2024-10-09 01:28:31.212071] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:08:32.398 [2024-10-09 01:28:31.212190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77593 ] 00:08:32.658 [2024-10-09 01:28:31.347340] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:32.658 [2024-10-09 01:28:31.377052] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.658 [2024-10-09 01:28:31.446280] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.658 [2024-10-09 01:28:31.522188] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.658 [2024-10-09 01:28:31.522235] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.227 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:33.227 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:33.227 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:33.227 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:33.227 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.227 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.227 BaseBdev1_malloc 00:08:33.227 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.227 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:33.227 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.227 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.227 true 00:08:33.227 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.227 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:33.227 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:33.227 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.227 [2024-10-09 01:28:32.060983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:33.227 [2024-10-09 01:28:32.061052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.227 [2024-10-09 01:28:32.061071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:33.227 [2024-10-09 01:28:32.061087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.227 [2024-10-09 01:28:32.063468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.227 [2024-10-09 01:28:32.063507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:33.227 BaseBdev1 00:08:33.228 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.228 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:33.228 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:33.228 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.228 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.228 BaseBdev2_malloc 00:08:33.228 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.228 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:33.228 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.228 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.228 true 00:08:33.228 01:28:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.228 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:33.228 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.228 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.228 [2024-10-09 01:28:32.113374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:33.228 [2024-10-09 01:28:32.113427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.228 [2024-10-09 01:28:32.113442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:33.228 [2024-10-09 01:28:32.113453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.228 [2024-10-09 01:28:32.115873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.228 [2024-10-09 01:28:32.115983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:33.487 BaseBdev2 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.487 BaseBdev3_malloc 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:33.487 
01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.487 true 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.487 [2024-10-09 01:28:32.159852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:33.487 [2024-10-09 01:28:32.159967] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.487 [2024-10-09 01:28:32.159999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:33.487 [2024-10-09 01:28:32.160031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.487 [2024-10-09 01:28:32.162381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.487 [2024-10-09 01:28:32.162455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:33.487 BaseBdev3 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.487 [2024-10-09 01:28:32.171933] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:33.487 [2024-10-09 01:28:32.173992] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:33.487 [2024-10-09 01:28:32.174110] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:33.487 [2024-10-09 01:28:32.174307] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:33.487 [2024-10-09 01:28:32.174351] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:33.487 [2024-10-09 01:28:32.174626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:33.487 [2024-10-09 01:28:32.174799] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:33.487 [2024-10-09 01:28:32.174845] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:33.487 [2024-10-09 01:28:32.175016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:33.487 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.488 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.488 "name": "raid_bdev1", 00:08:33.488 "uuid": "1495848c-068f-4cc4-92d1-d170a4c6ca49", 00:08:33.488 "strip_size_kb": 64, 00:08:33.488 "state": "online", 00:08:33.488 "raid_level": "raid0", 00:08:33.488 "superblock": true, 00:08:33.488 "num_base_bdevs": 3, 00:08:33.488 "num_base_bdevs_discovered": 3, 00:08:33.488 "num_base_bdevs_operational": 3, 00:08:33.488 "base_bdevs_list": [ 00:08:33.488 { 00:08:33.488 "name": "BaseBdev1", 00:08:33.488 "uuid": "5e713c26-fb67-5fa1-a6a4-aa56a08d01f2", 00:08:33.488 "is_configured": true, 00:08:33.488 "data_offset": 2048, 00:08:33.488 "data_size": 63488 00:08:33.488 }, 00:08:33.488 { 00:08:33.488 "name": "BaseBdev2", 00:08:33.488 "uuid": "fa005596-7d92-50b1-ba14-865b9502d8fb", 00:08:33.488 "is_configured": true, 00:08:33.488 "data_offset": 2048, 00:08:33.488 "data_size": 63488 00:08:33.488 }, 00:08:33.488 { 00:08:33.488 "name": "BaseBdev3", 00:08:33.488 "uuid": "f6c17bb5-3405-5168-989b-fb7b773b0839", 00:08:33.488 "is_configured": true, 00:08:33.488 "data_offset": 
2048, 00:08:33.488 "data_size": 63488 00:08:33.488 } 00:08:33.488 ] 00:08:33.488 }' 00:08:33.488 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.488 01:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.747 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:33.747 01:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:34.007 [2024-10-09 01:28:32.668533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:34.946 01:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:34.946 01:28:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.946 01:28:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.946 01:28:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.946 01:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:34.946 01:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:34.946 01:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:34.946 01:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:34.946 01:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:34.946 01:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.946 01:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.946 01:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:08:34.946 01:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.946 01:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.946 01:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.946 01:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.947 01:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.947 01:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.947 01:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.947 01:28:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.947 01:28:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.947 01:28:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.947 01:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.947 "name": "raid_bdev1", 00:08:34.947 "uuid": "1495848c-068f-4cc4-92d1-d170a4c6ca49", 00:08:34.947 "strip_size_kb": 64, 00:08:34.947 "state": "online", 00:08:34.947 "raid_level": "raid0", 00:08:34.947 "superblock": true, 00:08:34.947 "num_base_bdevs": 3, 00:08:34.947 "num_base_bdevs_discovered": 3, 00:08:34.947 "num_base_bdevs_operational": 3, 00:08:34.947 "base_bdevs_list": [ 00:08:34.947 { 00:08:34.947 "name": "BaseBdev1", 00:08:34.947 "uuid": "5e713c26-fb67-5fa1-a6a4-aa56a08d01f2", 00:08:34.947 "is_configured": true, 00:08:34.947 "data_offset": 2048, 00:08:34.947 "data_size": 63488 00:08:34.947 }, 00:08:34.947 { 00:08:34.947 "name": "BaseBdev2", 00:08:34.947 "uuid": "fa005596-7d92-50b1-ba14-865b9502d8fb", 00:08:34.947 "is_configured": true, 00:08:34.947 "data_offset": 2048, 
00:08:34.947 "data_size": 63488 00:08:34.947 }, 00:08:34.947 { 00:08:34.947 "name": "BaseBdev3", 00:08:34.947 "uuid": "f6c17bb5-3405-5168-989b-fb7b773b0839", 00:08:34.947 "is_configured": true, 00:08:34.947 "data_offset": 2048, 00:08:34.947 "data_size": 63488 00:08:34.947 } 00:08:34.947 ] 00:08:34.947 }' 00:08:34.947 01:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.947 01:28:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.207 01:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:35.207 01:28:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.207 01:28:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.207 [2024-10-09 01:28:33.987414] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:35.207 [2024-10-09 01:28:33.987571] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:35.207 [2024-10-09 01:28:33.990029] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:35.207 [2024-10-09 01:28:33.990118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.207 [2024-10-09 01:28:33.990179] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:35.207 [2024-10-09 01:28:33.990218] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:35.207 01:28:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.207 { 00:08:35.207 "results": [ 00:08:35.207 { 00:08:35.207 "job": "raid_bdev1", 00:08:35.207 "core_mask": "0x1", 00:08:35.207 "workload": "randrw", 00:08:35.207 "percentage": 50, 00:08:35.207 "status": "finished", 00:08:35.207 "queue_depth": 1, 00:08:35.207 "io_size": 131072, 
00:08:35.207 "runtime": 1.316906, 00:08:35.207 "iops": 15372.395599989673, 00:08:35.207 "mibps": 1921.5494499987092, 00:08:35.207 "io_failed": 1, 00:08:35.207 "io_timeout": 0, 00:08:35.207 "avg_latency_us": 91.30178980460681, 00:08:35.207 "min_latency_us": 24.321450361718817, 00:08:35.207 "max_latency_us": 1328.085069293123 00:08:35.207 } 00:08:35.207 ], 00:08:35.207 "core_count": 1 00:08:35.207 } 00:08:35.207 01:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 77593 00:08:35.207 01:28:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 77593 ']' 00:08:35.207 01:28:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 77593 00:08:35.207 01:28:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:35.207 01:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:35.207 01:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77593 00:08:35.207 01:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:35.207 01:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:35.207 killing process with pid 77593 00:08:35.207 01:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77593' 00:08:35.207 01:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 77593 00:08:35.207 [2024-10-09 01:28:34.025924] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:35.207 01:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 77593 00:08:35.207 [2024-10-09 01:28:34.071495] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:35.777 01:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:35.777 01:28:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.U5RikoqLPw 00:08:35.777 01:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:35.777 01:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:08:35.777 01:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:35.777 01:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:35.777 01:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:35.777 01:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:08:35.777 ************************************ 00:08:35.777 END TEST raid_read_error_test 00:08:35.777 ************************************ 00:08:35.777 00:08:35.777 real 0m3.342s 00:08:35.777 user 0m4.031s 00:08:35.777 sys 0m0.607s 00:08:35.777 01:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:35.777 01:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.777 01:28:34 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:35.777 01:28:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:35.777 01:28:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:35.777 01:28:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:35.777 ************************************ 00:08:35.777 START TEST raid_write_error_test 00:08:35.777 ************************************ 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 
00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:35.777 
01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CASR5RU0xy 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=77723 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 77723 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 77723 ']' 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:35.777 01:28:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.777 [2024-10-09 01:28:34.622958] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:08:35.777 [2024-10-09 01:28:34.623169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77723 ] 00:08:36.037 [2024-10-09 01:28:34.754469] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:36.037 [2024-10-09 01:28:34.771808] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.037 [2024-10-09 01:28:34.839363] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.037 [2024-10-09 01:28:34.915281] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.037 [2024-10-09 01:28:34.915331] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.625 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:36.625 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:36.625 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:36.625 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:36.625 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.625 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.625 BaseBdev1_malloc 00:08:36.625 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.625 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:36.625 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.625 01:28:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.625 true 00:08:36.625 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.625 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:36.625 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.625 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.625 [2024-10-09 01:28:35.482589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:36.625 [2024-10-09 01:28:35.482656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.625 [2024-10-09 01:28:35.482678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:36.625 [2024-10-09 01:28:35.482694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.625 [2024-10-09 01:28:35.485057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.625 [2024-10-09 01:28:35.485101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:36.625 BaseBdev1 00:08:36.625 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.625 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:36.625 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:36.625 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.625 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.885 BaseBdev2_malloc 00:08:36.885 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:36.885 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:36.885 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.885 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.885 true 00:08:36.885 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.885 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:36.885 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.885 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.885 [2024-10-09 01:28:35.539456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:36.885 [2024-10-09 01:28:35.539594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.885 [2024-10-09 01:28:35.539626] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:36.885 [2024-10-09 01:28:35.539656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.885 [2024-10-09 01:28:35.541951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.885 [2024-10-09 01:28:35.542027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:36.885 BaseBdev2 00:08:36.885 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.885 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:36.885 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:36.885 01:28:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.885 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.885 BaseBdev3_malloc 00:08:36.885 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.885 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:36.885 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.885 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.885 true 00:08:36.885 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.885 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:36.885 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.885 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.885 [2024-10-09 01:28:35.585978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:36.885 [2024-10-09 01:28:35.586027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.885 [2024-10-09 01:28:35.586044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:36.885 [2024-10-09 01:28:35.586055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.885 [2024-10-09 01:28:35.588319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.885 [2024-10-09 01:28:35.588433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:36.885 BaseBdev3 00:08:36.885 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:36.885 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:36.885 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.885 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.885 [2024-10-09 01:28:35.598058] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:36.886 [2024-10-09 01:28:35.600369] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:36.886 [2024-10-09 01:28:35.600444] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:36.886 [2024-10-09 01:28:35.600630] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:36.886 [2024-10-09 01:28:35.600648] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:36.886 [2024-10-09 01:28:35.600899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:36.886 [2024-10-09 01:28:35.601032] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:36.886 [2024-10-09 01:28:35.601045] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:36.886 [2024-10-09 01:28:35.601164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.886 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.886 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:36.886 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:36.886 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:08:36.886 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.886 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.886 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.886 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.886 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.886 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.886 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.886 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.886 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:36.886 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.886 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.886 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.886 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.886 "name": "raid_bdev1", 00:08:36.886 "uuid": "b3aaaa69-1e8c-446e-af71-b9bc191606b0", 00:08:36.886 "strip_size_kb": 64, 00:08:36.886 "state": "online", 00:08:36.886 "raid_level": "raid0", 00:08:36.886 "superblock": true, 00:08:36.886 "num_base_bdevs": 3, 00:08:36.886 "num_base_bdevs_discovered": 3, 00:08:36.886 "num_base_bdevs_operational": 3, 00:08:36.886 "base_bdevs_list": [ 00:08:36.886 { 00:08:36.886 "name": "BaseBdev1", 00:08:36.886 "uuid": "294936c5-96c0-5df8-b512-d159cd16e283", 00:08:36.886 "is_configured": true, 00:08:36.886 "data_offset": 2048, 
00:08:36.886 "data_size": 63488 00:08:36.886 }, 00:08:36.886 { 00:08:36.886 "name": "BaseBdev2", 00:08:36.886 "uuid": "9ab4fcd4-88c8-5847-bc41-109d91a641a8", 00:08:36.886 "is_configured": true, 00:08:36.886 "data_offset": 2048, 00:08:36.886 "data_size": 63488 00:08:36.886 }, 00:08:36.886 { 00:08:36.886 "name": "BaseBdev3", 00:08:36.886 "uuid": "2638325d-934f-5917-9e60-3e7c522a7ca7", 00:08:36.886 "is_configured": true, 00:08:36.886 "data_offset": 2048, 00:08:36.886 "data_size": 63488 00:08:36.886 } 00:08:36.886 ] 00:08:36.886 }' 00:08:36.886 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.886 01:28:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.146 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:37.146 01:28:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:37.405 [2024-10-09 01:28:36.090650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:38.344 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:38.344 01:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.344 01:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.344 01:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.344 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:38.344 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:38.344 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:38.344 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # 
verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:38.344 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:38.344 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:38.344 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.344 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.344 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.344 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.344 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.344 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.344 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.344 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.344 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:38.344 01:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.344 01:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.344 01:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.344 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.344 "name": "raid_bdev1", 00:08:38.344 "uuid": "b3aaaa69-1e8c-446e-af71-b9bc191606b0", 00:08:38.344 "strip_size_kb": 64, 00:08:38.344 "state": "online", 00:08:38.344 "raid_level": "raid0", 00:08:38.344 "superblock": true, 00:08:38.344 "num_base_bdevs": 3, 00:08:38.344 "num_base_bdevs_discovered": 3, 
00:08:38.344 "num_base_bdevs_operational": 3, 00:08:38.344 "base_bdevs_list": [ 00:08:38.344 { 00:08:38.344 "name": "BaseBdev1", 00:08:38.344 "uuid": "294936c5-96c0-5df8-b512-d159cd16e283", 00:08:38.344 "is_configured": true, 00:08:38.344 "data_offset": 2048, 00:08:38.344 "data_size": 63488 00:08:38.344 }, 00:08:38.344 { 00:08:38.344 "name": "BaseBdev2", 00:08:38.344 "uuid": "9ab4fcd4-88c8-5847-bc41-109d91a641a8", 00:08:38.344 "is_configured": true, 00:08:38.344 "data_offset": 2048, 00:08:38.344 "data_size": 63488 00:08:38.344 }, 00:08:38.344 { 00:08:38.344 "name": "BaseBdev3", 00:08:38.344 "uuid": "2638325d-934f-5917-9e60-3e7c522a7ca7", 00:08:38.345 "is_configured": true, 00:08:38.345 "data_offset": 2048, 00:08:38.345 "data_size": 63488 00:08:38.345 } 00:08:38.345 ] 00:08:38.345 }' 00:08:38.345 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.345 01:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.604 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:38.604 01:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.604 01:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.604 [2024-10-09 01:28:37.489979] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:38.604 [2024-10-09 01:28:37.490125] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:38.604 [2024-10-09 01:28:37.492672] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:38.604 [2024-10-09 01:28:37.492770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.604 [2024-10-09 01:28:37.492831] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:38.604 [2024-10-09 01:28:37.492872] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:38.604 { 00:08:38.604 "results": [ 00:08:38.604 { 00:08:38.604 "job": "raid_bdev1", 00:08:38.604 "core_mask": "0x1", 00:08:38.604 "workload": "randrw", 00:08:38.604 "percentage": 50, 00:08:38.604 "status": "finished", 00:08:38.604 "queue_depth": 1, 00:08:38.604 "io_size": 131072, 00:08:38.604 "runtime": 1.397223, 00:08:38.604 "iops": 15474.98144533836, 00:08:38.604 "mibps": 1934.372680667295, 00:08:38.604 "io_failed": 1, 00:08:38.604 "io_timeout": 0, 00:08:38.604 "avg_latency_us": 90.70878236349877, 00:08:38.604 "min_latency_us": 24.321450361718817, 00:08:38.604 "max_latency_us": 1335.2253116011505 00:08:38.604 } 00:08:38.604 ], 00:08:38.604 "core_count": 1 00:08:38.604 } 00:08:38.604 01:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.604 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 77723 00:08:38.604 01:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 77723 ']' 00:08:38.864 01:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 77723 00:08:38.864 01:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:38.864 01:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:38.864 01:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77723 00:08:38.864 01:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:38.864 01:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:38.864 killing process with pid 77723 00:08:38.864 01:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77723' 00:08:38.864 01:28:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 77723 00:08:38.864 [2024-10-09 01:28:37.546145] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:38.864 01:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 77723 00:08:38.864 [2024-10-09 01:28:37.591040] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:39.124 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CASR5RU0xy 00:08:39.124 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:39.124 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:39.124 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:39.124 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:39.124 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:39.124 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:39.124 01:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:39.124 00:08:39.124 real 0m3.450s 00:08:39.124 user 0m4.172s 00:08:39.124 sys 0m0.660s 00:08:39.124 01:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.124 ************************************ 00:08:39.124 END TEST raid_write_error_test 00:08:39.124 ************************************ 00:08:39.124 01:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.384 01:28:38 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:39.384 01:28:38 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:08:39.384 01:28:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:39.384 01:28:38 
bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.384 01:28:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:39.384 ************************************ 00:08:39.384 START TEST raid_state_function_test 00:08:39.384 ************************************ 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:39.384 Process raid pid: 77856 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=77856 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77856' 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 77856 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 77856 ']' 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:39.384 01:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.384 [2024-10-09 01:28:38.135737] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:08:39.384 [2024-10-09 01:28:38.135876] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.644 [2024-10-09 01:28:38.287465] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:39.644 [2024-10-09 01:28:38.313945] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.644 [2024-10-09 01:28:38.381874] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.644 [2024-10-09 01:28:38.457610] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.644 [2024-10-09 01:28:38.457748] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.213 01:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:40.213 01:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:40.213 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:40.213 01:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.213 01:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.213 [2024-10-09 01:28:38.957896] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:40.213 [2024-10-09 01:28:38.958020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:40.213 [2024-10-09 01:28:38.958041] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:40.213 [2024-10-09 01:28:38.958049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:40.213 [2024-10-09 01:28:38.958060] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:40.213 [2024-10-09 01:28:38.958067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:40.213 01:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.213 01:28:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:40.213 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.213 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.213 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.213 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.213 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.213 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.213 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.213 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.213 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.213 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.213 01:28:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.213 01:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.213 01:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.213 01:28:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.213 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.213 "name": "Existed_Raid", 00:08:40.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.213 "strip_size_kb": 64, 00:08:40.213 "state": "configuring", 00:08:40.213 
"raid_level": "concat", 00:08:40.213 "superblock": false, 00:08:40.213 "num_base_bdevs": 3, 00:08:40.213 "num_base_bdevs_discovered": 0, 00:08:40.213 "num_base_bdevs_operational": 3, 00:08:40.213 "base_bdevs_list": [ 00:08:40.213 { 00:08:40.213 "name": "BaseBdev1", 00:08:40.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.213 "is_configured": false, 00:08:40.213 "data_offset": 0, 00:08:40.213 "data_size": 0 00:08:40.213 }, 00:08:40.213 { 00:08:40.213 "name": "BaseBdev2", 00:08:40.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.213 "is_configured": false, 00:08:40.213 "data_offset": 0, 00:08:40.213 "data_size": 0 00:08:40.213 }, 00:08:40.213 { 00:08:40.213 "name": "BaseBdev3", 00:08:40.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.213 "is_configured": false, 00:08:40.213 "data_offset": 0, 00:08:40.213 "data_size": 0 00:08:40.213 } 00:08:40.213 ] 00:08:40.213 }' 00:08:40.213 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.214 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.783 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:40.783 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.783 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.783 [2024-10-09 01:28:39.385900] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:40.783 [2024-10-09 01:28:39.385987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:40.783 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.783 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 
BaseBdev3'\''' -n Existed_Raid 00:08:40.783 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.783 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.783 [2024-10-09 01:28:39.397913] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:40.783 [2024-10-09 01:28:39.397987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:40.783 [2024-10-09 01:28:39.398015] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:40.783 [2024-10-09 01:28:39.398036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:40.783 [2024-10-09 01:28:39.398057] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:40.783 [2024-10-09 01:28:39.398075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:40.783 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.783 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:40.783 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.783 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.783 [2024-10-09 01:28:39.424766] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.783 BaseBdev1 00:08:40.783 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.783 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:40.783 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:40.783 01:28:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:40.783 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:40.783 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:40.783 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:40.783 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:40.783 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.783 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.783 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.783 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:40.783 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.783 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.783 [ 00:08:40.783 { 00:08:40.783 "name": "BaseBdev1", 00:08:40.783 "aliases": [ 00:08:40.783 "1e8c26b7-0859-41c4-b164-64b7f207e5a4" 00:08:40.783 ], 00:08:40.783 "product_name": "Malloc disk", 00:08:40.783 "block_size": 512, 00:08:40.783 "num_blocks": 65536, 00:08:40.783 "uuid": "1e8c26b7-0859-41c4-b164-64b7f207e5a4", 00:08:40.783 "assigned_rate_limits": { 00:08:40.783 "rw_ios_per_sec": 0, 00:08:40.783 "rw_mbytes_per_sec": 0, 00:08:40.783 "r_mbytes_per_sec": 0, 00:08:40.783 "w_mbytes_per_sec": 0 00:08:40.783 }, 00:08:40.783 "claimed": true, 00:08:40.783 "claim_type": "exclusive_write", 00:08:40.783 "zoned": false, 00:08:40.783 "supported_io_types": { 00:08:40.783 "read": true, 00:08:40.783 "write": true, 00:08:40.783 "unmap": true, 00:08:40.783 "flush": true, 
00:08:40.784 "reset": true, 00:08:40.784 "nvme_admin": false, 00:08:40.784 "nvme_io": false, 00:08:40.784 "nvme_io_md": false, 00:08:40.784 "write_zeroes": true, 00:08:40.784 "zcopy": true, 00:08:40.784 "get_zone_info": false, 00:08:40.784 "zone_management": false, 00:08:40.784 "zone_append": false, 00:08:40.784 "compare": false, 00:08:40.784 "compare_and_write": false, 00:08:40.784 "abort": true, 00:08:40.784 "seek_hole": false, 00:08:40.784 "seek_data": false, 00:08:40.784 "copy": true, 00:08:40.784 "nvme_iov_md": false 00:08:40.784 }, 00:08:40.784 "memory_domains": [ 00:08:40.784 { 00:08:40.784 "dma_device_id": "system", 00:08:40.784 "dma_device_type": 1 00:08:40.784 }, 00:08:40.784 { 00:08:40.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.784 "dma_device_type": 2 00:08:40.784 } 00:08:40.784 ], 00:08:40.784 "driver_specific": {} 00:08:40.784 } 00:08:40.784 ] 00:08:40.784 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.784 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:40.784 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:40.784 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.784 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.784 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.784 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.784 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.784 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.784 01:28:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.784 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.784 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.784 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.784 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.784 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.784 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.784 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.784 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.784 "name": "Existed_Raid", 00:08:40.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.784 "strip_size_kb": 64, 00:08:40.784 "state": "configuring", 00:08:40.784 "raid_level": "concat", 00:08:40.784 "superblock": false, 00:08:40.784 "num_base_bdevs": 3, 00:08:40.784 "num_base_bdevs_discovered": 1, 00:08:40.784 "num_base_bdevs_operational": 3, 00:08:40.784 "base_bdevs_list": [ 00:08:40.784 { 00:08:40.784 "name": "BaseBdev1", 00:08:40.784 "uuid": "1e8c26b7-0859-41c4-b164-64b7f207e5a4", 00:08:40.784 "is_configured": true, 00:08:40.784 "data_offset": 0, 00:08:40.784 "data_size": 65536 00:08:40.784 }, 00:08:40.784 { 00:08:40.784 "name": "BaseBdev2", 00:08:40.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.784 "is_configured": false, 00:08:40.784 "data_offset": 0, 00:08:40.784 "data_size": 0 00:08:40.784 }, 00:08:40.784 { 00:08:40.784 "name": "BaseBdev3", 00:08:40.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.784 "is_configured": false, 00:08:40.784 "data_offset": 0, 00:08:40.784 "data_size": 0 
00:08:40.784 } 00:08:40.784 ] 00:08:40.784 }' 00:08:40.784 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.784 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.044 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:41.044 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.044 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.044 [2024-10-09 01:28:39.872901] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:41.044 [2024-10-09 01:28:39.873006] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:41.044 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.044 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:41.044 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.044 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.044 [2024-10-09 01:28:39.884932] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:41.044 [2024-10-09 01:28:39.887080] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:41.044 [2024-10-09 01:28:39.887152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:41.044 [2024-10-09 01:28:39.887188] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:41.044 [2024-10-09 01:28:39.887209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 
doesn't exist now 00:08:41.044 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.044 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:41.044 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:41.044 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:41.044 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.044 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.044 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:41.044 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.044 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.044 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.044 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.044 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.044 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.044 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.044 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.044 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.044 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.044 01:28:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.304 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.304 "name": "Existed_Raid", 00:08:41.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.304 "strip_size_kb": 64, 00:08:41.304 "state": "configuring", 00:08:41.304 "raid_level": "concat", 00:08:41.304 "superblock": false, 00:08:41.304 "num_base_bdevs": 3, 00:08:41.304 "num_base_bdevs_discovered": 1, 00:08:41.304 "num_base_bdevs_operational": 3, 00:08:41.304 "base_bdevs_list": [ 00:08:41.304 { 00:08:41.304 "name": "BaseBdev1", 00:08:41.304 "uuid": "1e8c26b7-0859-41c4-b164-64b7f207e5a4", 00:08:41.304 "is_configured": true, 00:08:41.304 "data_offset": 0, 00:08:41.304 "data_size": 65536 00:08:41.304 }, 00:08:41.304 { 00:08:41.304 "name": "BaseBdev2", 00:08:41.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.304 "is_configured": false, 00:08:41.304 "data_offset": 0, 00:08:41.304 "data_size": 0 00:08:41.304 }, 00:08:41.304 { 00:08:41.304 "name": "BaseBdev3", 00:08:41.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.304 "is_configured": false, 00:08:41.304 "data_offset": 0, 00:08:41.304 "data_size": 0 00:08:41.304 } 00:08:41.304 ] 00:08:41.304 }' 00:08:41.304 01:28:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.304 01:28:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.564 [2024-10-09 01:28:40.346584] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:41.564 BaseBdev2 00:08:41.564 
01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.564 [ 00:08:41.564 { 00:08:41.564 "name": "BaseBdev2", 00:08:41.564 "aliases": [ 00:08:41.564 "849927b7-7f59-419b-9336-4244d1422239" 00:08:41.564 ], 00:08:41.564 "product_name": "Malloc disk", 00:08:41.564 "block_size": 512, 00:08:41.564 "num_blocks": 65536, 00:08:41.564 "uuid": "849927b7-7f59-419b-9336-4244d1422239", 00:08:41.564 "assigned_rate_limits": { 00:08:41.564 "rw_ios_per_sec": 0, 00:08:41.564 "rw_mbytes_per_sec": 0, 
00:08:41.564 "r_mbytes_per_sec": 0, 00:08:41.564 "w_mbytes_per_sec": 0 00:08:41.564 }, 00:08:41.564 "claimed": true, 00:08:41.564 "claim_type": "exclusive_write", 00:08:41.564 "zoned": false, 00:08:41.564 "supported_io_types": { 00:08:41.564 "read": true, 00:08:41.564 "write": true, 00:08:41.564 "unmap": true, 00:08:41.564 "flush": true, 00:08:41.564 "reset": true, 00:08:41.564 "nvme_admin": false, 00:08:41.564 "nvme_io": false, 00:08:41.564 "nvme_io_md": false, 00:08:41.564 "write_zeroes": true, 00:08:41.564 "zcopy": true, 00:08:41.564 "get_zone_info": false, 00:08:41.564 "zone_management": false, 00:08:41.564 "zone_append": false, 00:08:41.564 "compare": false, 00:08:41.564 "compare_and_write": false, 00:08:41.564 "abort": true, 00:08:41.564 "seek_hole": false, 00:08:41.564 "seek_data": false, 00:08:41.564 "copy": true, 00:08:41.564 "nvme_iov_md": false 00:08:41.564 }, 00:08:41.564 "memory_domains": [ 00:08:41.564 { 00:08:41.564 "dma_device_id": "system", 00:08:41.564 "dma_device_type": 1 00:08:41.564 }, 00:08:41.564 { 00:08:41.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.564 "dma_device_type": 2 00:08:41.564 } 00:08:41.564 ], 00:08:41.564 "driver_specific": {} 00:08:41.564 } 00:08:41.564 ] 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.564 "name": "Existed_Raid", 00:08:41.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.564 "strip_size_kb": 64, 00:08:41.564 "state": "configuring", 00:08:41.564 "raid_level": "concat", 00:08:41.564 "superblock": false, 00:08:41.564 "num_base_bdevs": 3, 00:08:41.564 "num_base_bdevs_discovered": 2, 00:08:41.564 "num_base_bdevs_operational": 3, 00:08:41.564 "base_bdevs_list": [ 00:08:41.564 { 00:08:41.564 "name": "BaseBdev1", 00:08:41.564 "uuid": "1e8c26b7-0859-41c4-b164-64b7f207e5a4", 
00:08:41.564 "is_configured": true, 00:08:41.564 "data_offset": 0, 00:08:41.564 "data_size": 65536 00:08:41.564 }, 00:08:41.564 { 00:08:41.564 "name": "BaseBdev2", 00:08:41.564 "uuid": "849927b7-7f59-419b-9336-4244d1422239", 00:08:41.564 "is_configured": true, 00:08:41.564 "data_offset": 0, 00:08:41.564 "data_size": 65536 00:08:41.564 }, 00:08:41.564 { 00:08:41.564 "name": "BaseBdev3", 00:08:41.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.564 "is_configured": false, 00:08:41.564 "data_offset": 0, 00:08:41.564 "data_size": 0 00:08:41.564 } 00:08:41.564 ] 00:08:41.564 }' 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.564 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.134 [2024-10-09 01:28:40.803257] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:42.134 [2024-10-09 01:28:40.803307] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:42.134 [2024-10-09 01:28:40.803315] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:42.134 [2024-10-09 01:28:40.803711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:42.134 [2024-10-09 01:28:40.803863] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:42.134 [2024-10-09 01:28:40.803876] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:42.134 [2024-10-09 01:28:40.804089] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.134 BaseBdev3 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.134 [ 00:08:42.134 { 00:08:42.134 "name": "BaseBdev3", 00:08:42.134 "aliases": [ 00:08:42.134 "e6223abf-3d95-45d2-a852-01f5445f2130" 00:08:42.134 ], 00:08:42.134 "product_name": "Malloc disk", 00:08:42.134 "block_size": 512, 00:08:42.134 "num_blocks": 65536, 00:08:42.134 "uuid": "e6223abf-3d95-45d2-a852-01f5445f2130", 00:08:42.134 
"assigned_rate_limits": { 00:08:42.134 "rw_ios_per_sec": 0, 00:08:42.134 "rw_mbytes_per_sec": 0, 00:08:42.134 "r_mbytes_per_sec": 0, 00:08:42.134 "w_mbytes_per_sec": 0 00:08:42.134 }, 00:08:42.134 "claimed": true, 00:08:42.134 "claim_type": "exclusive_write", 00:08:42.134 "zoned": false, 00:08:42.134 "supported_io_types": { 00:08:42.134 "read": true, 00:08:42.134 "write": true, 00:08:42.134 "unmap": true, 00:08:42.134 "flush": true, 00:08:42.134 "reset": true, 00:08:42.134 "nvme_admin": false, 00:08:42.134 "nvme_io": false, 00:08:42.134 "nvme_io_md": false, 00:08:42.134 "write_zeroes": true, 00:08:42.134 "zcopy": true, 00:08:42.134 "get_zone_info": false, 00:08:42.134 "zone_management": false, 00:08:42.134 "zone_append": false, 00:08:42.134 "compare": false, 00:08:42.134 "compare_and_write": false, 00:08:42.134 "abort": true, 00:08:42.134 "seek_hole": false, 00:08:42.134 "seek_data": false, 00:08:42.134 "copy": true, 00:08:42.134 "nvme_iov_md": false 00:08:42.134 }, 00:08:42.134 "memory_domains": [ 00:08:42.134 { 00:08:42.134 "dma_device_id": "system", 00:08:42.134 "dma_device_type": 1 00:08:42.134 }, 00:08:42.134 { 00:08:42.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.134 "dma_device_type": 2 00:08:42.134 } 00:08:42.134 ], 00:08:42.134 "driver_specific": {} 00:08:42.134 } 00:08:42.134 ] 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.134 "name": "Existed_Raid", 00:08:42.134 "uuid": "85c7ab2e-8958-410e-9666-40a554cbf862", 00:08:42.134 "strip_size_kb": 64, 00:08:42.134 "state": "online", 00:08:42.134 "raid_level": "concat", 00:08:42.134 "superblock": false, 00:08:42.134 "num_base_bdevs": 3, 00:08:42.134 "num_base_bdevs_discovered": 3, 00:08:42.134 "num_base_bdevs_operational": 3, 00:08:42.134 "base_bdevs_list": [ 00:08:42.134 { 
00:08:42.134 "name": "BaseBdev1", 00:08:42.134 "uuid": "1e8c26b7-0859-41c4-b164-64b7f207e5a4", 00:08:42.134 "is_configured": true, 00:08:42.134 "data_offset": 0, 00:08:42.134 "data_size": 65536 00:08:42.134 }, 00:08:42.134 { 00:08:42.134 "name": "BaseBdev2", 00:08:42.134 "uuid": "849927b7-7f59-419b-9336-4244d1422239", 00:08:42.134 "is_configured": true, 00:08:42.134 "data_offset": 0, 00:08:42.134 "data_size": 65536 00:08:42.134 }, 00:08:42.134 { 00:08:42.134 "name": "BaseBdev3", 00:08:42.134 "uuid": "e6223abf-3d95-45d2-a852-01f5445f2130", 00:08:42.134 "is_configured": true, 00:08:42.134 "data_offset": 0, 00:08:42.134 "data_size": 65536 00:08:42.134 } 00:08:42.134 ] 00:08:42.134 }' 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.134 01:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.394 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:42.394 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:42.394 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:42.394 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:42.394 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:42.394 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:42.394 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:42.394 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.394 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.394 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- 
# jq '.[]' 00:08:42.394 [2024-10-09 01:28:41.199682] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.394 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.394 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:42.394 "name": "Existed_Raid", 00:08:42.394 "aliases": [ 00:08:42.394 "85c7ab2e-8958-410e-9666-40a554cbf862" 00:08:42.394 ], 00:08:42.394 "product_name": "Raid Volume", 00:08:42.394 "block_size": 512, 00:08:42.394 "num_blocks": 196608, 00:08:42.394 "uuid": "85c7ab2e-8958-410e-9666-40a554cbf862", 00:08:42.394 "assigned_rate_limits": { 00:08:42.394 "rw_ios_per_sec": 0, 00:08:42.394 "rw_mbytes_per_sec": 0, 00:08:42.394 "r_mbytes_per_sec": 0, 00:08:42.394 "w_mbytes_per_sec": 0 00:08:42.394 }, 00:08:42.394 "claimed": false, 00:08:42.394 "zoned": false, 00:08:42.394 "supported_io_types": { 00:08:42.394 "read": true, 00:08:42.394 "write": true, 00:08:42.394 "unmap": true, 00:08:42.394 "flush": true, 00:08:42.394 "reset": true, 00:08:42.394 "nvme_admin": false, 00:08:42.394 "nvme_io": false, 00:08:42.394 "nvme_io_md": false, 00:08:42.394 "write_zeroes": true, 00:08:42.394 "zcopy": false, 00:08:42.394 "get_zone_info": false, 00:08:42.394 "zone_management": false, 00:08:42.394 "zone_append": false, 00:08:42.394 "compare": false, 00:08:42.394 "compare_and_write": false, 00:08:42.394 "abort": false, 00:08:42.394 "seek_hole": false, 00:08:42.394 "seek_data": false, 00:08:42.394 "copy": false, 00:08:42.394 "nvme_iov_md": false 00:08:42.394 }, 00:08:42.394 "memory_domains": [ 00:08:42.394 { 00:08:42.394 "dma_device_id": "system", 00:08:42.394 "dma_device_type": 1 00:08:42.394 }, 00:08:42.394 { 00:08:42.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.394 "dma_device_type": 2 00:08:42.394 }, 00:08:42.394 { 00:08:42.394 "dma_device_id": "system", 00:08:42.394 "dma_device_type": 1 00:08:42.394 }, 00:08:42.394 { 00:08:42.394 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.394 "dma_device_type": 2 00:08:42.394 }, 00:08:42.394 { 00:08:42.394 "dma_device_id": "system", 00:08:42.394 "dma_device_type": 1 00:08:42.394 }, 00:08:42.394 { 00:08:42.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.394 "dma_device_type": 2 00:08:42.394 } 00:08:42.394 ], 00:08:42.394 "driver_specific": { 00:08:42.394 "raid": { 00:08:42.394 "uuid": "85c7ab2e-8958-410e-9666-40a554cbf862", 00:08:42.394 "strip_size_kb": 64, 00:08:42.394 "state": "online", 00:08:42.394 "raid_level": "concat", 00:08:42.394 "superblock": false, 00:08:42.394 "num_base_bdevs": 3, 00:08:42.394 "num_base_bdevs_discovered": 3, 00:08:42.394 "num_base_bdevs_operational": 3, 00:08:42.394 "base_bdevs_list": [ 00:08:42.394 { 00:08:42.394 "name": "BaseBdev1", 00:08:42.394 "uuid": "1e8c26b7-0859-41c4-b164-64b7f207e5a4", 00:08:42.394 "is_configured": true, 00:08:42.394 "data_offset": 0, 00:08:42.394 "data_size": 65536 00:08:42.394 }, 00:08:42.394 { 00:08:42.394 "name": "BaseBdev2", 00:08:42.394 "uuid": "849927b7-7f59-419b-9336-4244d1422239", 00:08:42.394 "is_configured": true, 00:08:42.394 "data_offset": 0, 00:08:42.394 "data_size": 65536 00:08:42.394 }, 00:08:42.394 { 00:08:42.394 "name": "BaseBdev3", 00:08:42.394 "uuid": "e6223abf-3d95-45d2-a852-01f5445f2130", 00:08:42.394 "is_configured": true, 00:08:42.394 "data_offset": 0, 00:08:42.394 "data_size": 65536 00:08:42.394 } 00:08:42.394 ] 00:08:42.394 } 00:08:42.394 } 00:08:42.394 }' 00:08:42.394 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:42.394 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:42.394 BaseBdev2 00:08:42.394 BaseBdev3' 00:08:42.394 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.655 01:28:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.655 01:28:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.655 [2024-10-09 01:28:41.427503] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:42.655 [2024-10-09 01:28:41.427543] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:42.655 [2024-10-09 01:28:41.427602] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.655 01:28:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.655 "name": "Existed_Raid", 00:08:42.655 "uuid": "85c7ab2e-8958-410e-9666-40a554cbf862", 00:08:42.655 "strip_size_kb": 64, 00:08:42.655 "state": "offline", 00:08:42.655 "raid_level": "concat", 00:08:42.655 "superblock": false, 00:08:42.655 "num_base_bdevs": 3, 00:08:42.655 "num_base_bdevs_discovered": 2, 00:08:42.655 "num_base_bdevs_operational": 2, 00:08:42.655 "base_bdevs_list": [ 00:08:42.655 { 00:08:42.655 "name": null, 00:08:42.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.655 "is_configured": false, 00:08:42.655 "data_offset": 0, 00:08:42.655 "data_size": 65536 00:08:42.655 }, 00:08:42.655 { 00:08:42.655 "name": "BaseBdev2", 00:08:42.655 "uuid": "849927b7-7f59-419b-9336-4244d1422239", 00:08:42.655 "is_configured": true, 00:08:42.655 "data_offset": 0, 00:08:42.655 "data_size": 65536 00:08:42.655 }, 00:08:42.655 { 00:08:42.655 "name": "BaseBdev3", 00:08:42.655 "uuid": "e6223abf-3d95-45d2-a852-01f5445f2130", 00:08:42.655 "is_configured": true, 00:08:42.655 "data_offset": 0, 00:08:42.655 "data_size": 65536 00:08:42.655 } 00:08:42.655 ] 00:08:42.655 }' 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.655 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.915 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:42.915 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:42.915 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.915 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.915 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.915 01:28:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:42.915 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.175 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:43.175 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:43.175 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:43.175 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.175 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.175 [2024-10-09 01:28:41.815567] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:43.175 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.175 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:43.175 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:43.175 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:43.175 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.176 [2024-10-09 01:28:41.887361] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:43.176 [2024-10-09 01:28:41.887423] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.176 BaseBdev2 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.176 01:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.176 [ 00:08:43.176 { 00:08:43.176 "name": "BaseBdev2", 00:08:43.176 "aliases": [ 00:08:43.176 
"8661638b-4872-4b4d-aab7-0337911ab05d" 00:08:43.176 ], 00:08:43.176 "product_name": "Malloc disk", 00:08:43.176 "block_size": 512, 00:08:43.176 "num_blocks": 65536, 00:08:43.176 "uuid": "8661638b-4872-4b4d-aab7-0337911ab05d", 00:08:43.176 "assigned_rate_limits": { 00:08:43.176 "rw_ios_per_sec": 0, 00:08:43.176 "rw_mbytes_per_sec": 0, 00:08:43.176 "r_mbytes_per_sec": 0, 00:08:43.176 "w_mbytes_per_sec": 0 00:08:43.176 }, 00:08:43.176 "claimed": false, 00:08:43.176 "zoned": false, 00:08:43.176 "supported_io_types": { 00:08:43.176 "read": true, 00:08:43.176 "write": true, 00:08:43.176 "unmap": true, 00:08:43.176 "flush": true, 00:08:43.176 "reset": true, 00:08:43.176 "nvme_admin": false, 00:08:43.176 "nvme_io": false, 00:08:43.176 "nvme_io_md": false, 00:08:43.176 "write_zeroes": true, 00:08:43.176 "zcopy": true, 00:08:43.176 "get_zone_info": false, 00:08:43.176 "zone_management": false, 00:08:43.176 "zone_append": false, 00:08:43.176 "compare": false, 00:08:43.176 "compare_and_write": false, 00:08:43.176 "abort": true, 00:08:43.176 "seek_hole": false, 00:08:43.176 "seek_data": false, 00:08:43.176 "copy": true, 00:08:43.176 "nvme_iov_md": false 00:08:43.176 }, 00:08:43.176 "memory_domains": [ 00:08:43.176 { 00:08:43.176 "dma_device_id": "system", 00:08:43.176 "dma_device_type": 1 00:08:43.176 }, 00:08:43.176 { 00:08:43.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.176 "dma_device_type": 2 00:08:43.176 } 00:08:43.176 ], 00:08:43.176 "driver_specific": {} 00:08:43.176 } 00:08:43.176 ] 00:08:43.176 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.176 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:43.176 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:43.176 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:43.176 01:28:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:43.176 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.176 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.176 BaseBdev3 00:08:43.176 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.176 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:43.176 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:43.176 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:43.176 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:43.176 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:43.176 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:43.176 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:43.176 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.176 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.176 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.176 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:43.176 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.176 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.176 [ 00:08:43.176 { 00:08:43.176 "name": "BaseBdev3", 00:08:43.176 "aliases": [ 00:08:43.176 
"0a73da23-86af-480d-8f29-7f7020471b45" 00:08:43.176 ], 00:08:43.176 "product_name": "Malloc disk", 00:08:43.176 "block_size": 512, 00:08:43.176 "num_blocks": 65536, 00:08:43.176 "uuid": "0a73da23-86af-480d-8f29-7f7020471b45", 00:08:43.176 "assigned_rate_limits": { 00:08:43.176 "rw_ios_per_sec": 0, 00:08:43.176 "rw_mbytes_per_sec": 0, 00:08:43.176 "r_mbytes_per_sec": 0, 00:08:43.176 "w_mbytes_per_sec": 0 00:08:43.176 }, 00:08:43.176 "claimed": false, 00:08:43.176 "zoned": false, 00:08:43.176 "supported_io_types": { 00:08:43.436 "read": true, 00:08:43.436 "write": true, 00:08:43.436 "unmap": true, 00:08:43.436 "flush": true, 00:08:43.436 "reset": true, 00:08:43.436 "nvme_admin": false, 00:08:43.436 "nvme_io": false, 00:08:43.436 "nvme_io_md": false, 00:08:43.436 "write_zeroes": true, 00:08:43.436 "zcopy": true, 00:08:43.436 "get_zone_info": false, 00:08:43.436 "zone_management": false, 00:08:43.436 "zone_append": false, 00:08:43.436 "compare": false, 00:08:43.436 "compare_and_write": false, 00:08:43.436 "abort": true, 00:08:43.436 "seek_hole": false, 00:08:43.436 "seek_data": false, 00:08:43.436 "copy": true, 00:08:43.436 "nvme_iov_md": false 00:08:43.436 }, 00:08:43.436 "memory_domains": [ 00:08:43.436 { 00:08:43.436 "dma_device_id": "system", 00:08:43.436 "dma_device_type": 1 00:08:43.436 }, 00:08:43.436 { 00:08:43.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.436 "dma_device_type": 2 00:08:43.436 } 00:08:43.436 ], 00:08:43.436 "driver_specific": {} 00:08:43.436 } 00:08:43.436 ] 00:08:43.436 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.436 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:43.436 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:43.436 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:43.436 01:28:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:43.436 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.436 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.436 [2024-10-09 01:28:42.081736] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:43.436 [2024-10-09 01:28:42.081878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:43.436 [2024-10-09 01:28:42.081903] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:43.436 [2024-10-09 01:28:42.083967] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:43.436 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.436 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:43.436 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.436 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.436 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:43.436 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.436 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.436 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.436 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.436 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:43.437 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.437 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.437 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.437 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.437 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.437 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.437 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.437 "name": "Existed_Raid", 00:08:43.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.437 "strip_size_kb": 64, 00:08:43.437 "state": "configuring", 00:08:43.437 "raid_level": "concat", 00:08:43.437 "superblock": false, 00:08:43.437 "num_base_bdevs": 3, 00:08:43.437 "num_base_bdevs_discovered": 2, 00:08:43.437 "num_base_bdevs_operational": 3, 00:08:43.437 "base_bdevs_list": [ 00:08:43.437 { 00:08:43.437 "name": "BaseBdev1", 00:08:43.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.437 "is_configured": false, 00:08:43.437 "data_offset": 0, 00:08:43.437 "data_size": 0 00:08:43.437 }, 00:08:43.437 { 00:08:43.437 "name": "BaseBdev2", 00:08:43.437 "uuid": "8661638b-4872-4b4d-aab7-0337911ab05d", 00:08:43.437 "is_configured": true, 00:08:43.437 "data_offset": 0, 00:08:43.437 "data_size": 65536 00:08:43.437 }, 00:08:43.437 { 00:08:43.437 "name": "BaseBdev3", 00:08:43.437 "uuid": "0a73da23-86af-480d-8f29-7f7020471b45", 00:08:43.437 "is_configured": true, 00:08:43.437 "data_offset": 0, 00:08:43.437 "data_size": 65536 00:08:43.437 } 00:08:43.437 ] 00:08:43.437 }' 00:08:43.437 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:08:43.437 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.696 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:43.696 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.696 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.696 [2024-10-09 01:28:42.433784] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:43.696 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.696 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:43.696 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.696 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.696 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:43.696 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.696 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.696 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.696 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.696 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.696 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.696 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.696 01:28:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.696 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.696 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.696 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.696 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.696 "name": "Existed_Raid", 00:08:43.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.696 "strip_size_kb": 64, 00:08:43.696 "state": "configuring", 00:08:43.696 "raid_level": "concat", 00:08:43.696 "superblock": false, 00:08:43.696 "num_base_bdevs": 3, 00:08:43.696 "num_base_bdevs_discovered": 1, 00:08:43.696 "num_base_bdevs_operational": 3, 00:08:43.696 "base_bdevs_list": [ 00:08:43.696 { 00:08:43.696 "name": "BaseBdev1", 00:08:43.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.696 "is_configured": false, 00:08:43.696 "data_offset": 0, 00:08:43.696 "data_size": 0 00:08:43.696 }, 00:08:43.696 { 00:08:43.696 "name": null, 00:08:43.696 "uuid": "8661638b-4872-4b4d-aab7-0337911ab05d", 00:08:43.696 "is_configured": false, 00:08:43.696 "data_offset": 0, 00:08:43.696 "data_size": 65536 00:08:43.696 }, 00:08:43.696 { 00:08:43.696 "name": "BaseBdev3", 00:08:43.696 "uuid": "0a73da23-86af-480d-8f29-7f7020471b45", 00:08:43.696 "is_configured": true, 00:08:43.696 "data_offset": 0, 00:08:43.696 "data_size": 65536 00:08:43.696 } 00:08:43.696 ] 00:08:43.696 }' 00:08:43.696 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.696 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.264 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.264 01:28:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:44.264 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.264 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.264 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.264 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:44.264 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:44.264 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.264 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.264 [2024-10-09 01:28:42.934663] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:44.264 BaseBdev1 00:08:44.264 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.264 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:44.264 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:44.264 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:44.264 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:44.264 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:44.264 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:44.264 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:44.264 01:28:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.264 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.264 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.264 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:44.265 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.265 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.265 [ 00:08:44.265 { 00:08:44.265 "name": "BaseBdev1", 00:08:44.265 "aliases": [ 00:08:44.265 "c07b9781-29fd-4fe7-b940-98df70d8aa12" 00:08:44.265 ], 00:08:44.265 "product_name": "Malloc disk", 00:08:44.265 "block_size": 512, 00:08:44.265 "num_blocks": 65536, 00:08:44.265 "uuid": "c07b9781-29fd-4fe7-b940-98df70d8aa12", 00:08:44.265 "assigned_rate_limits": { 00:08:44.265 "rw_ios_per_sec": 0, 00:08:44.265 "rw_mbytes_per_sec": 0, 00:08:44.265 "r_mbytes_per_sec": 0, 00:08:44.265 "w_mbytes_per_sec": 0 00:08:44.265 }, 00:08:44.265 "claimed": true, 00:08:44.265 "claim_type": "exclusive_write", 00:08:44.265 "zoned": false, 00:08:44.265 "supported_io_types": { 00:08:44.265 "read": true, 00:08:44.265 "write": true, 00:08:44.265 "unmap": true, 00:08:44.265 "flush": true, 00:08:44.265 "reset": true, 00:08:44.265 "nvme_admin": false, 00:08:44.265 "nvme_io": false, 00:08:44.265 "nvme_io_md": false, 00:08:44.265 "write_zeroes": true, 00:08:44.265 "zcopy": true, 00:08:44.265 "get_zone_info": false, 00:08:44.265 "zone_management": false, 00:08:44.265 "zone_append": false, 00:08:44.265 "compare": false, 00:08:44.265 "compare_and_write": false, 00:08:44.265 "abort": true, 00:08:44.265 "seek_hole": false, 00:08:44.265 "seek_data": false, 00:08:44.265 "copy": true, 00:08:44.265 "nvme_iov_md": false 00:08:44.265 }, 00:08:44.265 "memory_domains": [ 00:08:44.265 { 00:08:44.265 
"dma_device_id": "system", 00:08:44.265 "dma_device_type": 1 00:08:44.265 }, 00:08:44.265 { 00:08:44.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.265 "dma_device_type": 2 00:08:44.265 } 00:08:44.265 ], 00:08:44.265 "driver_specific": {} 00:08:44.265 } 00:08:44.265 ] 00:08:44.265 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.265 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:44.265 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:44.265 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.265 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.265 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.265 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.265 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.265 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.265 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.265 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.265 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.265 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.265 01:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.265 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:44.265 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.265 01:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.265 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.265 "name": "Existed_Raid", 00:08:44.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.265 "strip_size_kb": 64, 00:08:44.265 "state": "configuring", 00:08:44.265 "raid_level": "concat", 00:08:44.265 "superblock": false, 00:08:44.265 "num_base_bdevs": 3, 00:08:44.265 "num_base_bdevs_discovered": 2, 00:08:44.265 "num_base_bdevs_operational": 3, 00:08:44.265 "base_bdevs_list": [ 00:08:44.265 { 00:08:44.265 "name": "BaseBdev1", 00:08:44.265 "uuid": "c07b9781-29fd-4fe7-b940-98df70d8aa12", 00:08:44.265 "is_configured": true, 00:08:44.265 "data_offset": 0, 00:08:44.265 "data_size": 65536 00:08:44.265 }, 00:08:44.265 { 00:08:44.265 "name": null, 00:08:44.265 "uuid": "8661638b-4872-4b4d-aab7-0337911ab05d", 00:08:44.265 "is_configured": false, 00:08:44.265 "data_offset": 0, 00:08:44.265 "data_size": 65536 00:08:44.265 }, 00:08:44.265 { 00:08:44.265 "name": "BaseBdev3", 00:08:44.265 "uuid": "0a73da23-86af-480d-8f29-7f7020471b45", 00:08:44.265 "is_configured": true, 00:08:44.265 "data_offset": 0, 00:08:44.265 "data_size": 65536 00:08:44.265 } 00:08:44.265 ] 00:08:44.265 }' 00:08:44.265 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.265 01:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.524 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.524 01:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.524 01:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.524 01:28:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:44.524 01:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.784 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:44.784 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:44.784 01:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.784 01:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.784 [2024-10-09 01:28:43.438845] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:44.784 01:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.784 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:44.784 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.784 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.784 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.784 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.784 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.784 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.784 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.784 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.784 01:28:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.784 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.784 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.784 01:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.784 01:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.784 01:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.784 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.784 "name": "Existed_Raid", 00:08:44.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.784 "strip_size_kb": 64, 00:08:44.784 "state": "configuring", 00:08:44.784 "raid_level": "concat", 00:08:44.784 "superblock": false, 00:08:44.784 "num_base_bdevs": 3, 00:08:44.784 "num_base_bdevs_discovered": 1, 00:08:44.784 "num_base_bdevs_operational": 3, 00:08:44.784 "base_bdevs_list": [ 00:08:44.784 { 00:08:44.784 "name": "BaseBdev1", 00:08:44.784 "uuid": "c07b9781-29fd-4fe7-b940-98df70d8aa12", 00:08:44.784 "is_configured": true, 00:08:44.784 "data_offset": 0, 00:08:44.784 "data_size": 65536 00:08:44.784 }, 00:08:44.784 { 00:08:44.784 "name": null, 00:08:44.784 "uuid": "8661638b-4872-4b4d-aab7-0337911ab05d", 00:08:44.784 "is_configured": false, 00:08:44.784 "data_offset": 0, 00:08:44.784 "data_size": 65536 00:08:44.784 }, 00:08:44.784 { 00:08:44.784 "name": null, 00:08:44.784 "uuid": "0a73da23-86af-480d-8f29-7f7020471b45", 00:08:44.784 "is_configured": false, 00:08:44.784 "data_offset": 0, 00:08:44.784 "data_size": 65536 00:08:44.784 } 00:08:44.784 ] 00:08:44.784 }' 00:08:44.784 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.784 01:28:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.044 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.044 01:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.044 01:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.044 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:45.044 01:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.044 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:45.044 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:45.044 01:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.044 01:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.044 [2024-10-09 01:28:43.826955] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:45.044 01:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.044 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:45.044 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.044 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.044 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.044 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.044 01:28:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.044 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.044 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.044 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.044 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.044 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.044 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.044 01:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.044 01:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.044 01:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.044 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.044 "name": "Existed_Raid", 00:08:45.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.044 "strip_size_kb": 64, 00:08:45.044 "state": "configuring", 00:08:45.044 "raid_level": "concat", 00:08:45.044 "superblock": false, 00:08:45.044 "num_base_bdevs": 3, 00:08:45.044 "num_base_bdevs_discovered": 2, 00:08:45.044 "num_base_bdevs_operational": 3, 00:08:45.044 "base_bdevs_list": [ 00:08:45.044 { 00:08:45.044 "name": "BaseBdev1", 00:08:45.044 "uuid": "c07b9781-29fd-4fe7-b940-98df70d8aa12", 00:08:45.044 "is_configured": true, 00:08:45.044 "data_offset": 0, 00:08:45.044 "data_size": 65536 00:08:45.044 }, 00:08:45.044 { 00:08:45.044 "name": null, 00:08:45.044 "uuid": "8661638b-4872-4b4d-aab7-0337911ab05d", 00:08:45.044 "is_configured": false, 00:08:45.044 "data_offset": 
0, 00:08:45.044 "data_size": 65536 00:08:45.044 }, 00:08:45.044 { 00:08:45.044 "name": "BaseBdev3", 00:08:45.044 "uuid": "0a73da23-86af-480d-8f29-7f7020471b45", 00:08:45.044 "is_configured": true, 00:08:45.044 "data_offset": 0, 00:08:45.044 "data_size": 65536 00:08:45.044 } 00:08:45.044 ] 00:08:45.044 }' 00:08:45.044 01:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.044 01:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.612 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:45.612 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.612 01:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.612 01:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.613 01:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.613 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:45.613 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:45.613 01:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.613 01:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.613 [2024-10-09 01:28:44.279123] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:45.613 01:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.613 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:45.613 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:45.613 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.613 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.613 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.613 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.613 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.613 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.613 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.613 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.613 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.613 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.613 01:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.613 01:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.613 01:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.613 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.613 "name": "Existed_Raid", 00:08:45.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.613 "strip_size_kb": 64, 00:08:45.613 "state": "configuring", 00:08:45.613 "raid_level": "concat", 00:08:45.613 "superblock": false, 00:08:45.613 "num_base_bdevs": 3, 00:08:45.613 "num_base_bdevs_discovered": 1, 00:08:45.613 "num_base_bdevs_operational": 3, 00:08:45.613 "base_bdevs_list": [ 
00:08:45.613 { 00:08:45.613 "name": null, 00:08:45.613 "uuid": "c07b9781-29fd-4fe7-b940-98df70d8aa12", 00:08:45.613 "is_configured": false, 00:08:45.613 "data_offset": 0, 00:08:45.613 "data_size": 65536 00:08:45.613 }, 00:08:45.613 { 00:08:45.613 "name": null, 00:08:45.613 "uuid": "8661638b-4872-4b4d-aab7-0337911ab05d", 00:08:45.613 "is_configured": false, 00:08:45.613 "data_offset": 0, 00:08:45.613 "data_size": 65536 00:08:45.613 }, 00:08:45.613 { 00:08:45.613 "name": "BaseBdev3", 00:08:45.613 "uuid": "0a73da23-86af-480d-8f29-7f7020471b45", 00:08:45.613 "is_configured": true, 00:08:45.613 "data_offset": 0, 00:08:45.613 "data_size": 65536 00:08:45.613 } 00:08:45.613 ] 00:08:45.613 }' 00:08:45.613 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.613 01:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.872 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.872 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:45.872 01:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.872 01:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.872 01:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.132 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:46.132 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:46.132 01:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.132 01:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.132 [2024-10-09 01:28:44.778392] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:46.132 01:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.132 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:46.132 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.132 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.132 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.132 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.132 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.132 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.132 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.132 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.132 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.132 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.132 01:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.132 01:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.132 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.132 01:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.132 01:28:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.132 "name": "Existed_Raid", 00:08:46.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.132 "strip_size_kb": 64, 00:08:46.132 "state": "configuring", 00:08:46.132 "raid_level": "concat", 00:08:46.132 "superblock": false, 00:08:46.132 "num_base_bdevs": 3, 00:08:46.132 "num_base_bdevs_discovered": 2, 00:08:46.132 "num_base_bdevs_operational": 3, 00:08:46.132 "base_bdevs_list": [ 00:08:46.132 { 00:08:46.132 "name": null, 00:08:46.132 "uuid": "c07b9781-29fd-4fe7-b940-98df70d8aa12", 00:08:46.132 "is_configured": false, 00:08:46.132 "data_offset": 0, 00:08:46.132 "data_size": 65536 00:08:46.132 }, 00:08:46.132 { 00:08:46.132 "name": "BaseBdev2", 00:08:46.132 "uuid": "8661638b-4872-4b4d-aab7-0337911ab05d", 00:08:46.132 "is_configured": true, 00:08:46.132 "data_offset": 0, 00:08:46.132 "data_size": 65536 00:08:46.132 }, 00:08:46.132 { 00:08:46.132 "name": "BaseBdev3", 00:08:46.132 "uuid": "0a73da23-86af-480d-8f29-7f7020471b45", 00:08:46.132 "is_configured": true, 00:08:46.132 "data_offset": 0, 00:08:46.132 "data_size": 65536 00:08:46.132 } 00:08:46.132 ] 00:08:46.132 }' 00:08:46.132 01:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.132 01:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- 
# [[ true == \t\r\u\e ]] 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c07b9781-29fd-4fe7-b940-98df70d8aa12 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.392 [2024-10-09 01:28:45.242704] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:46.392 [2024-10-09 01:28:45.242836] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:46.392 [2024-10-09 01:28:45.242863] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:46.392 [2024-10-09 01:28:45.243184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:08:46.392 [2024-10-09 01:28:45.243356] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:46.392 [2024-10-09 01:28:45.243404] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:46.392 [2024-10-09 01:28:45.243657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.392 NewBaseBdev 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.392 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.652 [ 00:08:46.652 { 00:08:46.652 "name": "NewBaseBdev", 00:08:46.652 "aliases": [ 00:08:46.652 "c07b9781-29fd-4fe7-b940-98df70d8aa12" 00:08:46.652 ], 00:08:46.652 "product_name": "Malloc disk", 00:08:46.652 "block_size": 512, 00:08:46.652 "num_blocks": 65536, 00:08:46.652 "uuid": "c07b9781-29fd-4fe7-b940-98df70d8aa12", 00:08:46.652 "assigned_rate_limits": { 00:08:46.652 "rw_ios_per_sec": 0, 00:08:46.652 "rw_mbytes_per_sec": 0, 00:08:46.652 "r_mbytes_per_sec": 0, 00:08:46.652 "w_mbytes_per_sec": 0 
00:08:46.652 }, 00:08:46.652 "claimed": true, 00:08:46.652 "claim_type": "exclusive_write", 00:08:46.652 "zoned": false, 00:08:46.652 "supported_io_types": { 00:08:46.652 "read": true, 00:08:46.652 "write": true, 00:08:46.652 "unmap": true, 00:08:46.652 "flush": true, 00:08:46.652 "reset": true, 00:08:46.652 "nvme_admin": false, 00:08:46.652 "nvme_io": false, 00:08:46.652 "nvme_io_md": false, 00:08:46.652 "write_zeroes": true, 00:08:46.652 "zcopy": true, 00:08:46.652 "get_zone_info": false, 00:08:46.652 "zone_management": false, 00:08:46.652 "zone_append": false, 00:08:46.652 "compare": false, 00:08:46.652 "compare_and_write": false, 00:08:46.652 "abort": true, 00:08:46.652 "seek_hole": false, 00:08:46.652 "seek_data": false, 00:08:46.652 "copy": true, 00:08:46.652 "nvme_iov_md": false 00:08:46.652 }, 00:08:46.652 "memory_domains": [ 00:08:46.652 { 00:08:46.652 "dma_device_id": "system", 00:08:46.652 "dma_device_type": 1 00:08:46.652 }, 00:08:46.652 { 00:08:46.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.652 "dma_device_type": 2 00:08:46.652 } 00:08:46.652 ], 00:08:46.652 "driver_specific": {} 00:08:46.652 } 00:08:46.652 ] 00:08:46.652 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.652 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:46.652 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:46.652 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.652 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:46.652 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.652 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.652 01:28:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.652 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.652 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.652 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.652 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.652 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.652 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.652 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.652 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.652 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.652 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.652 "name": "Existed_Raid", 00:08:46.652 "uuid": "573c2837-a22e-43ae-aeac-eef400bccee3", 00:08:46.652 "strip_size_kb": 64, 00:08:46.652 "state": "online", 00:08:46.652 "raid_level": "concat", 00:08:46.652 "superblock": false, 00:08:46.652 "num_base_bdevs": 3, 00:08:46.652 "num_base_bdevs_discovered": 3, 00:08:46.652 "num_base_bdevs_operational": 3, 00:08:46.652 "base_bdevs_list": [ 00:08:46.652 { 00:08:46.652 "name": "NewBaseBdev", 00:08:46.652 "uuid": "c07b9781-29fd-4fe7-b940-98df70d8aa12", 00:08:46.652 "is_configured": true, 00:08:46.652 "data_offset": 0, 00:08:46.652 "data_size": 65536 00:08:46.652 }, 00:08:46.652 { 00:08:46.652 "name": "BaseBdev2", 00:08:46.652 "uuid": "8661638b-4872-4b4d-aab7-0337911ab05d", 00:08:46.652 "is_configured": true, 00:08:46.652 
"data_offset": 0, 00:08:46.652 "data_size": 65536 00:08:46.652 }, 00:08:46.652 { 00:08:46.652 "name": "BaseBdev3", 00:08:46.652 "uuid": "0a73da23-86af-480d-8f29-7f7020471b45", 00:08:46.652 "is_configured": true, 00:08:46.652 "data_offset": 0, 00:08:46.652 "data_size": 65536 00:08:46.652 } 00:08:46.652 ] 00:08:46.652 }' 00:08:46.653 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.653 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.913 [2024-10-09 01:28:45.651117] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:46.913 "name": 
"Existed_Raid", 00:08:46.913 "aliases": [ 00:08:46.913 "573c2837-a22e-43ae-aeac-eef400bccee3" 00:08:46.913 ], 00:08:46.913 "product_name": "Raid Volume", 00:08:46.913 "block_size": 512, 00:08:46.913 "num_blocks": 196608, 00:08:46.913 "uuid": "573c2837-a22e-43ae-aeac-eef400bccee3", 00:08:46.913 "assigned_rate_limits": { 00:08:46.913 "rw_ios_per_sec": 0, 00:08:46.913 "rw_mbytes_per_sec": 0, 00:08:46.913 "r_mbytes_per_sec": 0, 00:08:46.913 "w_mbytes_per_sec": 0 00:08:46.913 }, 00:08:46.913 "claimed": false, 00:08:46.913 "zoned": false, 00:08:46.913 "supported_io_types": { 00:08:46.913 "read": true, 00:08:46.913 "write": true, 00:08:46.913 "unmap": true, 00:08:46.913 "flush": true, 00:08:46.913 "reset": true, 00:08:46.913 "nvme_admin": false, 00:08:46.913 "nvme_io": false, 00:08:46.913 "nvme_io_md": false, 00:08:46.913 "write_zeroes": true, 00:08:46.913 "zcopy": false, 00:08:46.913 "get_zone_info": false, 00:08:46.913 "zone_management": false, 00:08:46.913 "zone_append": false, 00:08:46.913 "compare": false, 00:08:46.913 "compare_and_write": false, 00:08:46.913 "abort": false, 00:08:46.913 "seek_hole": false, 00:08:46.913 "seek_data": false, 00:08:46.913 "copy": false, 00:08:46.913 "nvme_iov_md": false 00:08:46.913 }, 00:08:46.913 "memory_domains": [ 00:08:46.913 { 00:08:46.913 "dma_device_id": "system", 00:08:46.913 "dma_device_type": 1 00:08:46.913 }, 00:08:46.913 { 00:08:46.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.913 "dma_device_type": 2 00:08:46.913 }, 00:08:46.913 { 00:08:46.913 "dma_device_id": "system", 00:08:46.913 "dma_device_type": 1 00:08:46.913 }, 00:08:46.913 { 00:08:46.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.913 "dma_device_type": 2 00:08:46.913 }, 00:08:46.913 { 00:08:46.913 "dma_device_id": "system", 00:08:46.913 "dma_device_type": 1 00:08:46.913 }, 00:08:46.913 { 00:08:46.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.913 "dma_device_type": 2 00:08:46.913 } 00:08:46.913 ], 00:08:46.913 "driver_specific": { 
00:08:46.913 "raid": { 00:08:46.913 "uuid": "573c2837-a22e-43ae-aeac-eef400bccee3", 00:08:46.913 "strip_size_kb": 64, 00:08:46.913 "state": "online", 00:08:46.913 "raid_level": "concat", 00:08:46.913 "superblock": false, 00:08:46.913 "num_base_bdevs": 3, 00:08:46.913 "num_base_bdevs_discovered": 3, 00:08:46.913 "num_base_bdevs_operational": 3, 00:08:46.913 "base_bdevs_list": [ 00:08:46.913 { 00:08:46.913 "name": "NewBaseBdev", 00:08:46.913 "uuid": "c07b9781-29fd-4fe7-b940-98df70d8aa12", 00:08:46.913 "is_configured": true, 00:08:46.913 "data_offset": 0, 00:08:46.913 "data_size": 65536 00:08:46.913 }, 00:08:46.913 { 00:08:46.913 "name": "BaseBdev2", 00:08:46.913 "uuid": "8661638b-4872-4b4d-aab7-0337911ab05d", 00:08:46.913 "is_configured": true, 00:08:46.913 "data_offset": 0, 00:08:46.913 "data_size": 65536 00:08:46.913 }, 00:08:46.913 { 00:08:46.913 "name": "BaseBdev3", 00:08:46.913 "uuid": "0a73da23-86af-480d-8f29-7f7020471b45", 00:08:46.913 "is_configured": true, 00:08:46.913 "data_offset": 0, 00:08:46.913 "data_size": 65536 00:08:46.913 } 00:08:46.913 ] 00:08:46.913 } 00:08:46.913 } 00:08:46.913 }' 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:46.913 BaseBdev2 00:08:46.913 BaseBdev3' 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:46.913 01:28:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.913 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.173 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.173 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.173 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.173 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.173 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:47.173 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.173 01:28:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.173 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.173 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.173 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.173 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.173 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:47.173 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.173 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.173 [2024-10-09 01:28:45.878880] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:47.173 [2024-10-09 01:28:45.878908] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:47.173 [2024-10-09 01:28:45.878967] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.173 [2024-10-09 01:28:45.879024] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:47.173 [2024-10-09 01:28:45.879041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:47.173 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.173 01:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 77856 00:08:47.173 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 77856 ']' 00:08:47.173 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # kill -0 77856 00:08:47.173 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:47.173 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:47.173 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77856 00:08:47.173 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:47.173 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:47.173 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77856' 00:08:47.173 killing process with pid 77856 00:08:47.173 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 77856 00:08:47.173 [2024-10-09 01:28:45.916746] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:47.173 01:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 77856 00:08:47.173 [2024-10-09 01:28:45.974033] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.742 01:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:47.743 00:08:47.743 real 0m8.305s 00:08:47.743 user 0m13.743s 00:08:47.743 sys 0m1.721s 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.743 ************************************ 00:08:47.743 END TEST raid_state_function_test 00:08:47.743 ************************************ 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.743 01:28:46 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:47.743 01:28:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:47.743 01:28:46 
bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.743 01:28:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:47.743 ************************************ 00:08:47.743 START TEST raid_state_function_test_sb 00:08:47.743 ************************************ 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=78455 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78455' 00:08:47.743 Process raid pid: 78455 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 78455 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 78455 ']' 00:08:47.743 
01:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:47.743 01:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.743 [2024-10-09 01:28:46.504794] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:08:47.743 [2024-10-09 01:28:46.505009] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.003 [2024-10-09 01:28:46.638205] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:48.003 [2024-10-09 01:28:46.668401] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.003 [2024-10-09 01:28:46.737774] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.003 [2024-10-09 01:28:46.813440] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.003 [2024-10-09 01:28:46.813488] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.572 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:48.572 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:48.572 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:48.572 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.572 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.572 [2024-10-09 01:28:47.338471] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:48.572 [2024-10-09 01:28:47.338546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:48.572 [2024-10-09 01:28:47.338563] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.572 [2024-10-09 01:28:47.338572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:48.572 [2024-10-09 01:28:47.338583] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:48.572 [2024-10-09 01:28:47.338590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:48.572 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.573 01:28:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:48.573 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.573 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.573 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.573 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.573 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.573 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.573 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.573 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.573 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.573 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.573 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.573 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.573 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.573 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.573 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.573 "name": "Existed_Raid", 00:08:48.573 "uuid": "54614f2b-22e5-41ef-8626-fe8b6e6eab7b", 00:08:48.573 "strip_size_kb": 64, 
00:08:48.573 "state": "configuring", 00:08:48.573 "raid_level": "concat", 00:08:48.573 "superblock": true, 00:08:48.573 "num_base_bdevs": 3, 00:08:48.573 "num_base_bdevs_discovered": 0, 00:08:48.573 "num_base_bdevs_operational": 3, 00:08:48.573 "base_bdevs_list": [ 00:08:48.573 { 00:08:48.573 "name": "BaseBdev1", 00:08:48.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.573 "is_configured": false, 00:08:48.573 "data_offset": 0, 00:08:48.573 "data_size": 0 00:08:48.573 }, 00:08:48.573 { 00:08:48.573 "name": "BaseBdev2", 00:08:48.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.573 "is_configured": false, 00:08:48.573 "data_offset": 0, 00:08:48.573 "data_size": 0 00:08:48.573 }, 00:08:48.573 { 00:08:48.573 "name": "BaseBdev3", 00:08:48.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.573 "is_configured": false, 00:08:48.573 "data_offset": 0, 00:08:48.573 "data_size": 0 00:08:48.573 } 00:08:48.573 ] 00:08:48.573 }' 00:08:48.573 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.573 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.832 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:48.832 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.832 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.832 [2024-10-09 01:28:47.706427] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:48.832 [2024-10-09 01:28:47.706563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:48.832 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.832 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd 
bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:48.832 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.832 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.832 [2024-10-09 01:28:47.718473] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:48.832 [2024-10-09 01:28:47.718565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:48.832 [2024-10-09 01:28:47.718596] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.832 [2024-10-09 01:28:47.718616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:48.832 [2024-10-09 01:28:47.718636] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:48.832 [2024-10-09 01:28:47.718654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:48.832 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.832 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:48.832 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.092 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.092 [2024-10-09 01:28:47.745812] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.092 BaseBdev1 00:08:49.092 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.092 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:49.092 01:28:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:49.092 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:49.092 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:49.092 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:49.092 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:49.092 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:49.092 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.092 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.092 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.092 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:49.092 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.092 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.092 [ 00:08:49.092 { 00:08:49.092 "name": "BaseBdev1", 00:08:49.092 "aliases": [ 00:08:49.092 "9f183719-e183-400a-8cc0-a67200525c27" 00:08:49.092 ], 00:08:49.092 "product_name": "Malloc disk", 00:08:49.092 "block_size": 512, 00:08:49.092 "num_blocks": 65536, 00:08:49.092 "uuid": "9f183719-e183-400a-8cc0-a67200525c27", 00:08:49.092 "assigned_rate_limits": { 00:08:49.092 "rw_ios_per_sec": 0, 00:08:49.092 "rw_mbytes_per_sec": 0, 00:08:49.092 "r_mbytes_per_sec": 0, 00:08:49.092 "w_mbytes_per_sec": 0 00:08:49.092 }, 00:08:49.092 "claimed": true, 00:08:49.092 "claim_type": "exclusive_write", 00:08:49.092 "zoned": false, 00:08:49.092 "supported_io_types": { 
00:08:49.092 "read": true, 00:08:49.092 "write": true, 00:08:49.092 "unmap": true, 00:08:49.092 "flush": true, 00:08:49.093 "reset": true, 00:08:49.093 "nvme_admin": false, 00:08:49.093 "nvme_io": false, 00:08:49.093 "nvme_io_md": false, 00:08:49.093 "write_zeroes": true, 00:08:49.093 "zcopy": true, 00:08:49.093 "get_zone_info": false, 00:08:49.093 "zone_management": false, 00:08:49.093 "zone_append": false, 00:08:49.093 "compare": false, 00:08:49.093 "compare_and_write": false, 00:08:49.093 "abort": true, 00:08:49.093 "seek_hole": false, 00:08:49.093 "seek_data": false, 00:08:49.093 "copy": true, 00:08:49.093 "nvme_iov_md": false 00:08:49.093 }, 00:08:49.093 "memory_domains": [ 00:08:49.093 { 00:08:49.093 "dma_device_id": "system", 00:08:49.093 "dma_device_type": 1 00:08:49.093 }, 00:08:49.093 { 00:08:49.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.093 "dma_device_type": 2 00:08:49.093 } 00:08:49.093 ], 00:08:49.093 "driver_specific": {} 00:08:49.093 } 00:08:49.093 ] 00:08:49.093 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.093 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:49.093 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:49.093 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.093 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.093 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.093 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.093 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.093 01:28:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.093 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.093 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.093 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.093 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.093 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.093 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.093 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.093 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.093 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.093 "name": "Existed_Raid", 00:08:49.093 "uuid": "9d1dd2ea-a29e-4d09-a577-f00185575e40", 00:08:49.093 "strip_size_kb": 64, 00:08:49.093 "state": "configuring", 00:08:49.093 "raid_level": "concat", 00:08:49.093 "superblock": true, 00:08:49.093 "num_base_bdevs": 3, 00:08:49.093 "num_base_bdevs_discovered": 1, 00:08:49.093 "num_base_bdevs_operational": 3, 00:08:49.093 "base_bdevs_list": [ 00:08:49.093 { 00:08:49.093 "name": "BaseBdev1", 00:08:49.093 "uuid": "9f183719-e183-400a-8cc0-a67200525c27", 00:08:49.093 "is_configured": true, 00:08:49.093 "data_offset": 2048, 00:08:49.093 "data_size": 63488 00:08:49.093 }, 00:08:49.093 { 00:08:49.093 "name": "BaseBdev2", 00:08:49.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.093 "is_configured": false, 00:08:49.093 "data_offset": 0, 00:08:49.093 "data_size": 0 00:08:49.093 }, 00:08:49.093 { 00:08:49.093 "name": 
"BaseBdev3", 00:08:49.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.093 "is_configured": false, 00:08:49.093 "data_offset": 0, 00:08:49.093 "data_size": 0 00:08:49.093 } 00:08:49.093 ] 00:08:49.093 }' 00:08:49.093 01:28:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.093 01:28:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.353 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:49.353 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.353 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.353 [2024-10-09 01:28:48.213966] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:49.353 [2024-10-09 01:28:48.214031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:49.353 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.353 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:49.353 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.353 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.353 [2024-10-09 01:28:48.225991] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.353 [2024-10-09 01:28:48.228073] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:49.353 [2024-10-09 01:28:48.228166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:49.353 [2024-10-09 01:28:48.228184] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:49.353 [2024-10-09 01:28:48.228192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:49.353 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.353 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:49.353 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:49.353 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:49.353 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.353 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.353 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.353 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.353 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.353 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.353 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.353 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.353 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.353 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.353 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.353 01:28:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.353 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.613 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.613 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.613 "name": "Existed_Raid", 00:08:49.613 "uuid": "c73d0021-8d94-441a-88e4-d7b32cc4fdb5", 00:08:49.613 "strip_size_kb": 64, 00:08:49.613 "state": "configuring", 00:08:49.613 "raid_level": "concat", 00:08:49.613 "superblock": true, 00:08:49.613 "num_base_bdevs": 3, 00:08:49.613 "num_base_bdevs_discovered": 1, 00:08:49.613 "num_base_bdevs_operational": 3, 00:08:49.613 "base_bdevs_list": [ 00:08:49.613 { 00:08:49.613 "name": "BaseBdev1", 00:08:49.613 "uuid": "9f183719-e183-400a-8cc0-a67200525c27", 00:08:49.613 "is_configured": true, 00:08:49.613 "data_offset": 2048, 00:08:49.613 "data_size": 63488 00:08:49.613 }, 00:08:49.613 { 00:08:49.613 "name": "BaseBdev2", 00:08:49.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.613 "is_configured": false, 00:08:49.613 "data_offset": 0, 00:08:49.613 "data_size": 0 00:08:49.613 }, 00:08:49.613 { 00:08:49.613 "name": "BaseBdev3", 00:08:49.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.613 "is_configured": false, 00:08:49.613 "data_offset": 0, 00:08:49.613 "data_size": 0 00:08:49.613 } 00:08:49.613 ] 00:08:49.613 }' 00:08:49.613 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.613 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.873 [2024-10-09 01:28:48.708922] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:49.873 BaseBdev2 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.873 [ 00:08:49.873 { 00:08:49.873 "name": "BaseBdev2", 00:08:49.873 "aliases": [ 00:08:49.873 
"d7b668d7-9119-4723-9504-428653b3f82f" 00:08:49.873 ], 00:08:49.873 "product_name": "Malloc disk", 00:08:49.873 "block_size": 512, 00:08:49.873 "num_blocks": 65536, 00:08:49.873 "uuid": "d7b668d7-9119-4723-9504-428653b3f82f", 00:08:49.873 "assigned_rate_limits": { 00:08:49.873 "rw_ios_per_sec": 0, 00:08:49.873 "rw_mbytes_per_sec": 0, 00:08:49.873 "r_mbytes_per_sec": 0, 00:08:49.873 "w_mbytes_per_sec": 0 00:08:49.873 }, 00:08:49.873 "claimed": true, 00:08:49.873 "claim_type": "exclusive_write", 00:08:49.873 "zoned": false, 00:08:49.873 "supported_io_types": { 00:08:49.873 "read": true, 00:08:49.873 "write": true, 00:08:49.873 "unmap": true, 00:08:49.873 "flush": true, 00:08:49.873 "reset": true, 00:08:49.873 "nvme_admin": false, 00:08:49.873 "nvme_io": false, 00:08:49.873 "nvme_io_md": false, 00:08:49.873 "write_zeroes": true, 00:08:49.873 "zcopy": true, 00:08:49.873 "get_zone_info": false, 00:08:49.873 "zone_management": false, 00:08:49.873 "zone_append": false, 00:08:49.873 "compare": false, 00:08:49.873 "compare_and_write": false, 00:08:49.873 "abort": true, 00:08:49.873 "seek_hole": false, 00:08:49.873 "seek_data": false, 00:08:49.873 "copy": true, 00:08:49.873 "nvme_iov_md": false 00:08:49.873 }, 00:08:49.873 "memory_domains": [ 00:08:49.873 { 00:08:49.873 "dma_device_id": "system", 00:08:49.873 "dma_device_type": 1 00:08:49.873 }, 00:08:49.873 { 00:08:49.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.873 "dma_device_type": 2 00:08:49.873 } 00:08:49.873 ], 00:08:49.873 "driver_specific": {} 00:08:49.873 } 00:08:49.873 ] 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.873 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.874 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.874 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.874 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.874 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.874 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.133 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.133 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.133 "name": "Existed_Raid", 00:08:50.133 "uuid": "c73d0021-8d94-441a-88e4-d7b32cc4fdb5", 00:08:50.133 
"strip_size_kb": 64, 00:08:50.133 "state": "configuring", 00:08:50.133 "raid_level": "concat", 00:08:50.133 "superblock": true, 00:08:50.133 "num_base_bdevs": 3, 00:08:50.133 "num_base_bdevs_discovered": 2, 00:08:50.133 "num_base_bdevs_operational": 3, 00:08:50.133 "base_bdevs_list": [ 00:08:50.133 { 00:08:50.133 "name": "BaseBdev1", 00:08:50.133 "uuid": "9f183719-e183-400a-8cc0-a67200525c27", 00:08:50.133 "is_configured": true, 00:08:50.133 "data_offset": 2048, 00:08:50.133 "data_size": 63488 00:08:50.133 }, 00:08:50.133 { 00:08:50.133 "name": "BaseBdev2", 00:08:50.133 "uuid": "d7b668d7-9119-4723-9504-428653b3f82f", 00:08:50.133 "is_configured": true, 00:08:50.133 "data_offset": 2048, 00:08:50.133 "data_size": 63488 00:08:50.133 }, 00:08:50.133 { 00:08:50.133 "name": "BaseBdev3", 00:08:50.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.133 "is_configured": false, 00:08:50.133 "data_offset": 0, 00:08:50.133 "data_size": 0 00:08:50.133 } 00:08:50.133 ] 00:08:50.133 }' 00:08:50.133 01:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.133 01:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.393 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:50.393 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.393 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.393 [2024-10-09 01:28:49.241571] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:50.393 [2024-10-09 01:28:49.241778] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:50.393 [2024-10-09 01:28:49.241794] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:50.393 [2024-10-09 01:28:49.242098] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:50.393 BaseBdev3 00:08:50.393 [2024-10-09 01:28:49.242224] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:50.393 [2024-10-09 01:28:49.242243] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:50.393 [2024-10-09 01:28:49.242368] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.393 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.393 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:50.393 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:50.393 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:50.393 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:50.393 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:50.393 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:50.393 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:50.393 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.393 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.393 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.393 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:50.393 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:50.393 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.393 [ 00:08:50.393 { 00:08:50.393 "name": "BaseBdev3", 00:08:50.393 "aliases": [ 00:08:50.393 "23d63df1-0284-409f-b3bd-ef5c65ceae13" 00:08:50.393 ], 00:08:50.393 "product_name": "Malloc disk", 00:08:50.393 "block_size": 512, 00:08:50.393 "num_blocks": 65536, 00:08:50.393 "uuid": "23d63df1-0284-409f-b3bd-ef5c65ceae13", 00:08:50.393 "assigned_rate_limits": { 00:08:50.393 "rw_ios_per_sec": 0, 00:08:50.393 "rw_mbytes_per_sec": 0, 00:08:50.393 "r_mbytes_per_sec": 0, 00:08:50.393 "w_mbytes_per_sec": 0 00:08:50.393 }, 00:08:50.393 "claimed": true, 00:08:50.393 "claim_type": "exclusive_write", 00:08:50.393 "zoned": false, 00:08:50.393 "supported_io_types": { 00:08:50.393 "read": true, 00:08:50.393 "write": true, 00:08:50.393 "unmap": true, 00:08:50.393 "flush": true, 00:08:50.394 "reset": true, 00:08:50.394 "nvme_admin": false, 00:08:50.394 "nvme_io": false, 00:08:50.394 "nvme_io_md": false, 00:08:50.394 "write_zeroes": true, 00:08:50.394 "zcopy": true, 00:08:50.394 "get_zone_info": false, 00:08:50.394 "zone_management": false, 00:08:50.394 "zone_append": false, 00:08:50.394 "compare": false, 00:08:50.394 "compare_and_write": false, 00:08:50.394 "abort": true, 00:08:50.394 "seek_hole": false, 00:08:50.394 "seek_data": false, 00:08:50.394 "copy": true, 00:08:50.394 "nvme_iov_md": false 00:08:50.394 }, 00:08:50.394 "memory_domains": [ 00:08:50.394 { 00:08:50.394 "dma_device_id": "system", 00:08:50.394 "dma_device_type": 1 00:08:50.394 }, 00:08:50.394 { 00:08:50.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.394 "dma_device_type": 2 00:08:50.394 } 00:08:50.394 ], 00:08:50.394 "driver_specific": {} 00:08:50.394 } 00:08:50.394 ] 00:08:50.394 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.394 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:08:50.394 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:50.394 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:50.394 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:50.394 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.394 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.394 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.394 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.394 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.394 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.394 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.394 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.394 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.653 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.653 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.653 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.653 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.653 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.653 
01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.653 "name": "Existed_Raid", 00:08:50.653 "uuid": "c73d0021-8d94-441a-88e4-d7b32cc4fdb5", 00:08:50.653 "strip_size_kb": 64, 00:08:50.653 "state": "online", 00:08:50.653 "raid_level": "concat", 00:08:50.653 "superblock": true, 00:08:50.653 "num_base_bdevs": 3, 00:08:50.653 "num_base_bdevs_discovered": 3, 00:08:50.653 "num_base_bdevs_operational": 3, 00:08:50.653 "base_bdevs_list": [ 00:08:50.653 { 00:08:50.653 "name": "BaseBdev1", 00:08:50.653 "uuid": "9f183719-e183-400a-8cc0-a67200525c27", 00:08:50.653 "is_configured": true, 00:08:50.653 "data_offset": 2048, 00:08:50.653 "data_size": 63488 00:08:50.653 }, 00:08:50.653 { 00:08:50.653 "name": "BaseBdev2", 00:08:50.653 "uuid": "d7b668d7-9119-4723-9504-428653b3f82f", 00:08:50.653 "is_configured": true, 00:08:50.653 "data_offset": 2048, 00:08:50.653 "data_size": 63488 00:08:50.653 }, 00:08:50.653 { 00:08:50.653 "name": "BaseBdev3", 00:08:50.653 "uuid": "23d63df1-0284-409f-b3bd-ef5c65ceae13", 00:08:50.653 "is_configured": true, 00:08:50.653 "data_offset": 2048, 00:08:50.653 "data_size": 63488 00:08:50.653 } 00:08:50.653 ] 00:08:50.653 }' 00:08:50.653 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.653 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.913 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:50.913 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:50.913 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:50.913 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:50.913 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 
00:08:50.913 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:50.913 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:50.913 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:50.913 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.913 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.913 [2024-10-09 01:28:49.694005] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.913 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.913 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:50.913 "name": "Existed_Raid", 00:08:50.913 "aliases": [ 00:08:50.913 "c73d0021-8d94-441a-88e4-d7b32cc4fdb5" 00:08:50.913 ], 00:08:50.913 "product_name": "Raid Volume", 00:08:50.913 "block_size": 512, 00:08:50.913 "num_blocks": 190464, 00:08:50.913 "uuid": "c73d0021-8d94-441a-88e4-d7b32cc4fdb5", 00:08:50.913 "assigned_rate_limits": { 00:08:50.913 "rw_ios_per_sec": 0, 00:08:50.913 "rw_mbytes_per_sec": 0, 00:08:50.913 "r_mbytes_per_sec": 0, 00:08:50.913 "w_mbytes_per_sec": 0 00:08:50.913 }, 00:08:50.913 "claimed": false, 00:08:50.913 "zoned": false, 00:08:50.913 "supported_io_types": { 00:08:50.913 "read": true, 00:08:50.913 "write": true, 00:08:50.913 "unmap": true, 00:08:50.913 "flush": true, 00:08:50.913 "reset": true, 00:08:50.913 "nvme_admin": false, 00:08:50.913 "nvme_io": false, 00:08:50.913 "nvme_io_md": false, 00:08:50.913 "write_zeroes": true, 00:08:50.913 "zcopy": false, 00:08:50.913 "get_zone_info": false, 00:08:50.913 "zone_management": false, 00:08:50.913 "zone_append": false, 00:08:50.913 "compare": false, 00:08:50.913 "compare_and_write": false, 
00:08:50.913 "abort": false, 00:08:50.913 "seek_hole": false, 00:08:50.913 "seek_data": false, 00:08:50.913 "copy": false, 00:08:50.913 "nvme_iov_md": false 00:08:50.913 }, 00:08:50.913 "memory_domains": [ 00:08:50.913 { 00:08:50.913 "dma_device_id": "system", 00:08:50.913 "dma_device_type": 1 00:08:50.913 }, 00:08:50.913 { 00:08:50.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.913 "dma_device_type": 2 00:08:50.913 }, 00:08:50.913 { 00:08:50.913 "dma_device_id": "system", 00:08:50.913 "dma_device_type": 1 00:08:50.913 }, 00:08:50.913 { 00:08:50.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.913 "dma_device_type": 2 00:08:50.913 }, 00:08:50.913 { 00:08:50.913 "dma_device_id": "system", 00:08:50.913 "dma_device_type": 1 00:08:50.913 }, 00:08:50.913 { 00:08:50.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.913 "dma_device_type": 2 00:08:50.913 } 00:08:50.913 ], 00:08:50.913 "driver_specific": { 00:08:50.913 "raid": { 00:08:50.913 "uuid": "c73d0021-8d94-441a-88e4-d7b32cc4fdb5", 00:08:50.913 "strip_size_kb": 64, 00:08:50.913 "state": "online", 00:08:50.913 "raid_level": "concat", 00:08:50.913 "superblock": true, 00:08:50.913 "num_base_bdevs": 3, 00:08:50.913 "num_base_bdevs_discovered": 3, 00:08:50.913 "num_base_bdevs_operational": 3, 00:08:50.913 "base_bdevs_list": [ 00:08:50.913 { 00:08:50.913 "name": "BaseBdev1", 00:08:50.913 "uuid": "9f183719-e183-400a-8cc0-a67200525c27", 00:08:50.913 "is_configured": true, 00:08:50.913 "data_offset": 2048, 00:08:50.913 "data_size": 63488 00:08:50.913 }, 00:08:50.913 { 00:08:50.913 "name": "BaseBdev2", 00:08:50.913 "uuid": "d7b668d7-9119-4723-9504-428653b3f82f", 00:08:50.913 "is_configured": true, 00:08:50.913 "data_offset": 2048, 00:08:50.913 "data_size": 63488 00:08:50.913 }, 00:08:50.913 { 00:08:50.913 "name": "BaseBdev3", 00:08:50.913 "uuid": "23d63df1-0284-409f-b3bd-ef5c65ceae13", 00:08:50.913 "is_configured": true, 00:08:50.913 "data_offset": 2048, 00:08:50.913 "data_size": 63488 00:08:50.913 } 
00:08:50.913 ] 00:08:50.913 } 00:08:50.913 } 00:08:50.913 }' 00:08:50.913 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:50.913 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:50.913 BaseBdev2 00:08:50.913 BaseBdev3' 00:08:50.913 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.174 [2024-10-09 01:28:49.953838] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:51.174 [2024-10-09 01:28:49.953868] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.174 [2024-10-09 01:28:49.953920] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.174 01:28:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.174 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.174 "name": "Existed_Raid", 00:08:51.174 "uuid": "c73d0021-8d94-441a-88e4-d7b32cc4fdb5", 00:08:51.174 "strip_size_kb": 64, 00:08:51.174 "state": "offline", 00:08:51.174 "raid_level": "concat", 00:08:51.174 "superblock": true, 00:08:51.174 "num_base_bdevs": 3, 00:08:51.174 "num_base_bdevs_discovered": 2, 00:08:51.174 "num_base_bdevs_operational": 2, 00:08:51.174 "base_bdevs_list": [ 00:08:51.174 { 00:08:51.174 "name": null, 00:08:51.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.174 "is_configured": false, 00:08:51.174 "data_offset": 0, 00:08:51.174 "data_size": 63488 00:08:51.174 }, 00:08:51.174 { 00:08:51.174 "name": "BaseBdev2", 00:08:51.174 "uuid": "d7b668d7-9119-4723-9504-428653b3f82f", 00:08:51.174 "is_configured": true, 00:08:51.174 "data_offset": 2048, 00:08:51.174 "data_size": 63488 00:08:51.174 }, 00:08:51.174 { 00:08:51.174 "name": "BaseBdev3", 00:08:51.174 "uuid": "23d63df1-0284-409f-b3bd-ef5c65ceae13", 00:08:51.174 "is_configured": true, 00:08:51.174 "data_offset": 2048, 00:08:51.174 "data_size": 63488 00:08:51.174 } 00:08:51.174 ] 00:08:51.174 }' 00:08:51.174 01:28:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.174 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.744 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:51.744 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:51.744 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.744 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.744 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.744 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:51.744 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.744 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:51.744 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:51.744 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:51.744 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.744 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.744 [2024-10-09 01:28:50.386699] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:51.744 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.744 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:51.744 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:51.744 
01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:51.744 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.744 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.744 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.744 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.744 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.745 [2024-10-09 01:28:50.450739] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:51.745 [2024-10-09 01:28:50.450794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.745 01:28:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.745 BaseBdev2 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.745 [ 00:08:51.745 { 00:08:51.745 "name": "BaseBdev2", 00:08:51.745 "aliases": [ 00:08:51.745 "fa874a64-397b-4fb1-a537-6009a2745b5b" 00:08:51.745 ], 00:08:51.745 "product_name": "Malloc disk", 00:08:51.745 "block_size": 512, 00:08:51.745 "num_blocks": 65536, 00:08:51.745 "uuid": "fa874a64-397b-4fb1-a537-6009a2745b5b", 00:08:51.745 "assigned_rate_limits": { 00:08:51.745 "rw_ios_per_sec": 0, 00:08:51.745 "rw_mbytes_per_sec": 0, 00:08:51.745 "r_mbytes_per_sec": 0, 00:08:51.745 "w_mbytes_per_sec": 0 00:08:51.745 }, 00:08:51.745 "claimed": false, 00:08:51.745 "zoned": false, 00:08:51.745 "supported_io_types": { 00:08:51.745 "read": true, 00:08:51.745 "write": true, 00:08:51.745 "unmap": true, 00:08:51.745 "flush": true, 00:08:51.745 "reset": true, 00:08:51.745 "nvme_admin": false, 00:08:51.745 "nvme_io": false, 00:08:51.745 "nvme_io_md": false, 00:08:51.745 "write_zeroes": true, 00:08:51.745 "zcopy": true, 00:08:51.745 "get_zone_info": false, 00:08:51.745 "zone_management": false, 00:08:51.745 "zone_append": false, 00:08:51.745 "compare": false, 00:08:51.745 "compare_and_write": false, 00:08:51.745 "abort": true, 00:08:51.745 "seek_hole": 
false, 00:08:51.745 "seek_data": false, 00:08:51.745 "copy": true, 00:08:51.745 "nvme_iov_md": false 00:08:51.745 }, 00:08:51.745 "memory_domains": [ 00:08:51.745 { 00:08:51.745 "dma_device_id": "system", 00:08:51.745 "dma_device_type": 1 00:08:51.745 }, 00:08:51.745 { 00:08:51.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.745 "dma_device_type": 2 00:08:51.745 } 00:08:51.745 ], 00:08:51.745 "driver_specific": {} 00:08:51.745 } 00:08:51.745 ] 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.745 BaseBdev3 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:51.745 01:28:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.745 [ 00:08:51.745 { 00:08:51.745 "name": "BaseBdev3", 00:08:51.745 "aliases": [ 00:08:51.745 "3dfb2a41-2784-4ea1-9005-53fc2af0b04a" 00:08:51.745 ], 00:08:51.745 "product_name": "Malloc disk", 00:08:51.745 "block_size": 512, 00:08:51.745 "num_blocks": 65536, 00:08:51.745 "uuid": "3dfb2a41-2784-4ea1-9005-53fc2af0b04a", 00:08:51.745 "assigned_rate_limits": { 00:08:51.745 "rw_ios_per_sec": 0, 00:08:51.745 "rw_mbytes_per_sec": 0, 00:08:51.745 "r_mbytes_per_sec": 0, 00:08:51.745 "w_mbytes_per_sec": 0 00:08:51.745 }, 00:08:51.745 "claimed": false, 00:08:51.745 "zoned": false, 00:08:51.745 "supported_io_types": { 00:08:51.745 "read": true, 00:08:51.745 "write": true, 00:08:51.745 "unmap": true, 00:08:51.745 "flush": true, 00:08:51.745 "reset": true, 00:08:51.745 "nvme_admin": false, 00:08:51.745 "nvme_io": false, 00:08:51.745 "nvme_io_md": false, 00:08:51.745 "write_zeroes": true, 00:08:51.745 "zcopy": true, 00:08:51.745 "get_zone_info": false, 00:08:51.745 "zone_management": false, 00:08:51.745 "zone_append": false, 00:08:51.745 "compare": false, 00:08:51.745 
"compare_and_write": false, 00:08:51.745 "abort": true, 00:08:51.745 "seek_hole": false, 00:08:51.745 "seek_data": false, 00:08:51.745 "copy": true, 00:08:51.745 "nvme_iov_md": false 00:08:51.745 }, 00:08:51.745 "memory_domains": [ 00:08:51.745 { 00:08:51.745 "dma_device_id": "system", 00:08:51.745 "dma_device_type": 1 00:08:51.745 }, 00:08:51.745 { 00:08:51.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.745 "dma_device_type": 2 00:08:51.745 } 00:08:51.745 ], 00:08:51.745 "driver_specific": {} 00:08:51.745 } 00:08:51.745 ] 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.745 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.005 [2024-10-09 01:28:50.638568] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:52.005 [2024-10-09 01:28:50.638621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:52.005 [2024-10-09 01:28:50.638640] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:52.005 [2024-10-09 01:28:50.640825] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:52.005 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:08:52.005 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:52.005 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.005 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.005 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.005 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.005 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.005 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.005 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.005 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.005 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.005 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.005 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.006 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.006 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.006 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.006 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.006 "name": "Existed_Raid", 00:08:52.006 "uuid": "f1ec6574-b7b3-4723-acc2-ec7eb2d74be1", 00:08:52.006 
"strip_size_kb": 64, 00:08:52.006 "state": "configuring", 00:08:52.006 "raid_level": "concat", 00:08:52.006 "superblock": true, 00:08:52.006 "num_base_bdevs": 3, 00:08:52.006 "num_base_bdevs_discovered": 2, 00:08:52.006 "num_base_bdevs_operational": 3, 00:08:52.006 "base_bdevs_list": [ 00:08:52.006 { 00:08:52.006 "name": "BaseBdev1", 00:08:52.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.006 "is_configured": false, 00:08:52.006 "data_offset": 0, 00:08:52.006 "data_size": 0 00:08:52.006 }, 00:08:52.006 { 00:08:52.006 "name": "BaseBdev2", 00:08:52.006 "uuid": "fa874a64-397b-4fb1-a537-6009a2745b5b", 00:08:52.006 "is_configured": true, 00:08:52.006 "data_offset": 2048, 00:08:52.006 "data_size": 63488 00:08:52.006 }, 00:08:52.006 { 00:08:52.006 "name": "BaseBdev3", 00:08:52.006 "uuid": "3dfb2a41-2784-4ea1-9005-53fc2af0b04a", 00:08:52.006 "is_configured": true, 00:08:52.006 "data_offset": 2048, 00:08:52.006 "data_size": 63488 00:08:52.006 } 00:08:52.006 ] 00:08:52.006 }' 00:08:52.006 01:28:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.006 01:28:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.265 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:52.265 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.265 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.265 [2024-10-09 01:28:51.046609] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:52.266 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.266 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:52.266 01:28:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.266 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.266 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.266 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.266 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.266 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.266 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.266 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.266 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.266 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.266 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.266 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.266 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.266 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.266 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.266 "name": "Existed_Raid", 00:08:52.266 "uuid": "f1ec6574-b7b3-4723-acc2-ec7eb2d74be1", 00:08:52.266 "strip_size_kb": 64, 00:08:52.266 "state": "configuring", 00:08:52.266 "raid_level": "concat", 00:08:52.266 "superblock": true, 00:08:52.266 "num_base_bdevs": 3, 00:08:52.266 "num_base_bdevs_discovered": 1, 
00:08:52.266 "num_base_bdevs_operational": 3, 00:08:52.266 "base_bdevs_list": [ 00:08:52.266 { 00:08:52.266 "name": "BaseBdev1", 00:08:52.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.266 "is_configured": false, 00:08:52.266 "data_offset": 0, 00:08:52.266 "data_size": 0 00:08:52.266 }, 00:08:52.266 { 00:08:52.266 "name": null, 00:08:52.266 "uuid": "fa874a64-397b-4fb1-a537-6009a2745b5b", 00:08:52.266 "is_configured": false, 00:08:52.266 "data_offset": 0, 00:08:52.266 "data_size": 63488 00:08:52.266 }, 00:08:52.266 { 00:08:52.266 "name": "BaseBdev3", 00:08:52.266 "uuid": "3dfb2a41-2784-4ea1-9005-53fc2af0b04a", 00:08:52.266 "is_configured": true, 00:08:52.266 "data_offset": 2048, 00:08:52.266 "data_size": 63488 00:08:52.266 } 00:08:52.266 ] 00:08:52.266 }' 00:08:52.266 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.266 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:52.835 [2024-10-09 01:28:51.543375] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:52.835 BaseBdev1 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.835 [ 00:08:52.835 { 00:08:52.835 "name": "BaseBdev1", 00:08:52.835 "aliases": [ 00:08:52.835 "e1d65e15-4a7b-49f7-9706-570cdb4c5f00" 00:08:52.835 ], 00:08:52.835 "product_name": "Malloc 
disk", 00:08:52.835 "block_size": 512, 00:08:52.835 "num_blocks": 65536, 00:08:52.835 "uuid": "e1d65e15-4a7b-49f7-9706-570cdb4c5f00", 00:08:52.835 "assigned_rate_limits": { 00:08:52.835 "rw_ios_per_sec": 0, 00:08:52.835 "rw_mbytes_per_sec": 0, 00:08:52.835 "r_mbytes_per_sec": 0, 00:08:52.835 "w_mbytes_per_sec": 0 00:08:52.835 }, 00:08:52.835 "claimed": true, 00:08:52.835 "claim_type": "exclusive_write", 00:08:52.835 "zoned": false, 00:08:52.835 "supported_io_types": { 00:08:52.835 "read": true, 00:08:52.835 "write": true, 00:08:52.835 "unmap": true, 00:08:52.835 "flush": true, 00:08:52.835 "reset": true, 00:08:52.835 "nvme_admin": false, 00:08:52.835 "nvme_io": false, 00:08:52.835 "nvme_io_md": false, 00:08:52.835 "write_zeroes": true, 00:08:52.835 "zcopy": true, 00:08:52.835 "get_zone_info": false, 00:08:52.835 "zone_management": false, 00:08:52.835 "zone_append": false, 00:08:52.835 "compare": false, 00:08:52.835 "compare_and_write": false, 00:08:52.835 "abort": true, 00:08:52.835 "seek_hole": false, 00:08:52.835 "seek_data": false, 00:08:52.835 "copy": true, 00:08:52.835 "nvme_iov_md": false 00:08:52.835 }, 00:08:52.835 "memory_domains": [ 00:08:52.835 { 00:08:52.835 "dma_device_id": "system", 00:08:52.835 "dma_device_type": 1 00:08:52.835 }, 00:08:52.835 { 00:08:52.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.835 "dma_device_type": 2 00:08:52.835 } 00:08:52.835 ], 00:08:52.835 "driver_specific": {} 00:08:52.835 } 00:08:52.835 ] 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.835 01:28:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.835 "name": "Existed_Raid", 00:08:52.835 "uuid": "f1ec6574-b7b3-4723-acc2-ec7eb2d74be1", 00:08:52.835 "strip_size_kb": 64, 00:08:52.835 "state": "configuring", 00:08:52.835 "raid_level": "concat", 00:08:52.835 "superblock": true, 00:08:52.835 "num_base_bdevs": 3, 00:08:52.835 "num_base_bdevs_discovered": 2, 00:08:52.835 "num_base_bdevs_operational": 3, 00:08:52.835 "base_bdevs_list": [ 00:08:52.835 
{ 00:08:52.835 "name": "BaseBdev1", 00:08:52.835 "uuid": "e1d65e15-4a7b-49f7-9706-570cdb4c5f00", 00:08:52.835 "is_configured": true, 00:08:52.835 "data_offset": 2048, 00:08:52.835 "data_size": 63488 00:08:52.835 }, 00:08:52.835 { 00:08:52.835 "name": null, 00:08:52.835 "uuid": "fa874a64-397b-4fb1-a537-6009a2745b5b", 00:08:52.835 "is_configured": false, 00:08:52.835 "data_offset": 0, 00:08:52.835 "data_size": 63488 00:08:52.835 }, 00:08:52.835 { 00:08:52.835 "name": "BaseBdev3", 00:08:52.835 "uuid": "3dfb2a41-2784-4ea1-9005-53fc2af0b04a", 00:08:52.835 "is_configured": true, 00:08:52.835 "data_offset": 2048, 00:08:52.835 "data_size": 63488 00:08:52.835 } 00:08:52.835 ] 00:08:52.835 }' 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.835 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.407 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.407 01:28:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:53.407 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.407 01:28:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.407 01:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.407 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:53.407 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:53.407 01:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.407 01:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.407 [2024-10-09 01:28:52.047562] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:53.407 01:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.407 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:53.407 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.407 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.407 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.407 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.407 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.407 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.407 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.407 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.407 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.407 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.407 01:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.407 01:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.407 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.407 01:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.407 01:28:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.407 "name": "Existed_Raid", 00:08:53.407 "uuid": "f1ec6574-b7b3-4723-acc2-ec7eb2d74be1", 00:08:53.407 "strip_size_kb": 64, 00:08:53.407 "state": "configuring", 00:08:53.407 "raid_level": "concat", 00:08:53.407 "superblock": true, 00:08:53.407 "num_base_bdevs": 3, 00:08:53.407 "num_base_bdevs_discovered": 1, 00:08:53.407 "num_base_bdevs_operational": 3, 00:08:53.407 "base_bdevs_list": [ 00:08:53.407 { 00:08:53.407 "name": "BaseBdev1", 00:08:53.407 "uuid": "e1d65e15-4a7b-49f7-9706-570cdb4c5f00", 00:08:53.407 "is_configured": true, 00:08:53.407 "data_offset": 2048, 00:08:53.407 "data_size": 63488 00:08:53.407 }, 00:08:53.407 { 00:08:53.407 "name": null, 00:08:53.407 "uuid": "fa874a64-397b-4fb1-a537-6009a2745b5b", 00:08:53.407 "is_configured": false, 00:08:53.407 "data_offset": 0, 00:08:53.407 "data_size": 63488 00:08:53.407 }, 00:08:53.407 { 00:08:53.407 "name": null, 00:08:53.407 "uuid": "3dfb2a41-2784-4ea1-9005-53fc2af0b04a", 00:08:53.407 "is_configured": false, 00:08:53.407 "data_offset": 0, 00:08:53.407 "data_size": 63488 00:08:53.407 } 00:08:53.407 ] 00:08:53.407 }' 00:08:53.407 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.407 01:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.676 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.676 01:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.676 01:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.676 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:53.676 01:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.676 01:28:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:53.676 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:53.676 01:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.676 01:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.676 [2024-10-09 01:28:52.547714] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:53.676 01:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.676 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:53.676 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.676 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.676 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.676 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.676 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.676 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.676 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.676 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.676 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.676 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:08:53.676 01:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.676 01:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.676 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.952 01:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.952 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.952 "name": "Existed_Raid", 00:08:53.952 "uuid": "f1ec6574-b7b3-4723-acc2-ec7eb2d74be1", 00:08:53.952 "strip_size_kb": 64, 00:08:53.952 "state": "configuring", 00:08:53.952 "raid_level": "concat", 00:08:53.952 "superblock": true, 00:08:53.952 "num_base_bdevs": 3, 00:08:53.952 "num_base_bdevs_discovered": 2, 00:08:53.952 "num_base_bdevs_operational": 3, 00:08:53.952 "base_bdevs_list": [ 00:08:53.952 { 00:08:53.952 "name": "BaseBdev1", 00:08:53.952 "uuid": "e1d65e15-4a7b-49f7-9706-570cdb4c5f00", 00:08:53.952 "is_configured": true, 00:08:53.952 "data_offset": 2048, 00:08:53.952 "data_size": 63488 00:08:53.952 }, 00:08:53.952 { 00:08:53.952 "name": null, 00:08:53.952 "uuid": "fa874a64-397b-4fb1-a537-6009a2745b5b", 00:08:53.952 "is_configured": false, 00:08:53.952 "data_offset": 0, 00:08:53.952 "data_size": 63488 00:08:53.952 }, 00:08:53.952 { 00:08:53.952 "name": "BaseBdev3", 00:08:53.952 "uuid": "3dfb2a41-2784-4ea1-9005-53fc2af0b04a", 00:08:53.952 "is_configured": true, 00:08:53.952 "data_offset": 2048, 00:08:53.952 "data_size": 63488 00:08:53.952 } 00:08:53.952 ] 00:08:53.952 }' 00:08:53.952 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.952 01:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.212 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:54.212 01:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:54.212 01:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.212 01:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.212 01:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.212 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:54.212 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:54.212 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.212 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.212 [2024-10-09 01:28:53.011861] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:54.212 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.212 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:54.212 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.212 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.212 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.212 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.212 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.212 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:08:54.212 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.212 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.212 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.212 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.212 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.212 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.212 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.212 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.212 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.212 "name": "Existed_Raid", 00:08:54.212 "uuid": "f1ec6574-b7b3-4723-acc2-ec7eb2d74be1", 00:08:54.212 "strip_size_kb": 64, 00:08:54.212 "state": "configuring", 00:08:54.212 "raid_level": "concat", 00:08:54.212 "superblock": true, 00:08:54.212 "num_base_bdevs": 3, 00:08:54.212 "num_base_bdevs_discovered": 1, 00:08:54.212 "num_base_bdevs_operational": 3, 00:08:54.212 "base_bdevs_list": [ 00:08:54.212 { 00:08:54.212 "name": null, 00:08:54.212 "uuid": "e1d65e15-4a7b-49f7-9706-570cdb4c5f00", 00:08:54.212 "is_configured": false, 00:08:54.212 "data_offset": 0, 00:08:54.212 "data_size": 63488 00:08:54.212 }, 00:08:54.212 { 00:08:54.212 "name": null, 00:08:54.212 "uuid": "fa874a64-397b-4fb1-a537-6009a2745b5b", 00:08:54.212 "is_configured": false, 00:08:54.212 "data_offset": 0, 00:08:54.212 "data_size": 63488 00:08:54.212 }, 00:08:54.212 { 00:08:54.212 "name": "BaseBdev3", 00:08:54.212 "uuid": "3dfb2a41-2784-4ea1-9005-53fc2af0b04a", 
00:08:54.212 "is_configured": true, 00:08:54.212 "data_offset": 2048, 00:08:54.212 "data_size": 63488 00:08:54.212 } 00:08:54.212 ] 00:08:54.212 }' 00:08:54.212 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.212 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.781 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.781 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.781 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.781 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:54.781 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.781 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:54.781 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:54.781 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.781 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.781 [2024-10-09 01:28:53.528072] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:54.781 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.781 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:54.781 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.781 01:28:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.781 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.781 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.781 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.781 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.781 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.781 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.781 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.782 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.782 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.782 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.782 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.782 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.782 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.782 "name": "Existed_Raid", 00:08:54.782 "uuid": "f1ec6574-b7b3-4723-acc2-ec7eb2d74be1", 00:08:54.782 "strip_size_kb": 64, 00:08:54.782 "state": "configuring", 00:08:54.782 "raid_level": "concat", 00:08:54.782 "superblock": true, 00:08:54.782 "num_base_bdevs": 3, 00:08:54.782 "num_base_bdevs_discovered": 2, 00:08:54.782 "num_base_bdevs_operational": 3, 00:08:54.782 "base_bdevs_list": [ 00:08:54.782 { 00:08:54.782 "name": null, 00:08:54.782 
"uuid": "e1d65e15-4a7b-49f7-9706-570cdb4c5f00", 00:08:54.782 "is_configured": false, 00:08:54.782 "data_offset": 0, 00:08:54.782 "data_size": 63488 00:08:54.782 }, 00:08:54.782 { 00:08:54.782 "name": "BaseBdev2", 00:08:54.782 "uuid": "fa874a64-397b-4fb1-a537-6009a2745b5b", 00:08:54.782 "is_configured": true, 00:08:54.782 "data_offset": 2048, 00:08:54.782 "data_size": 63488 00:08:54.782 }, 00:08:54.782 { 00:08:54.782 "name": "BaseBdev3", 00:08:54.782 "uuid": "3dfb2a41-2784-4ea1-9005-53fc2af0b04a", 00:08:54.782 "is_configured": true, 00:08:54.782 "data_offset": 2048, 00:08:54.782 "data_size": 63488 00:08:54.782 } 00:08:54.782 ] 00:08:54.782 }' 00:08:54.782 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.782 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.041 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:55.041 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.041 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.041 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.301 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.301 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:55.301 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.302 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.302 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:55.302 01:28:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:55.302 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.302 01:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e1d65e15-4a7b-49f7-9706-570cdb4c5f00 00:08:55.302 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.302 01:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.302 [2024-10-09 01:28:54.012648] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:55.302 [2024-10-09 01:28:54.012831] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:55.302 [2024-10-09 01:28:54.012845] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:55.302 [2024-10-09 01:28:54.013113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:08:55.302 [2024-10-09 01:28:54.013238] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:55.302 [2024-10-09 01:28:54.013256] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:55.302 NewBaseBdev 00:08:55.302 [2024-10-09 01:28:54.013361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:55.302 01:28:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.302 [ 00:08:55.302 { 00:08:55.302 "name": "NewBaseBdev", 00:08:55.302 "aliases": [ 00:08:55.302 "e1d65e15-4a7b-49f7-9706-570cdb4c5f00" 00:08:55.302 ], 00:08:55.302 "product_name": "Malloc disk", 00:08:55.302 "block_size": 512, 00:08:55.302 "num_blocks": 65536, 00:08:55.302 "uuid": "e1d65e15-4a7b-49f7-9706-570cdb4c5f00", 00:08:55.302 "assigned_rate_limits": { 00:08:55.302 "rw_ios_per_sec": 0, 00:08:55.302 "rw_mbytes_per_sec": 0, 00:08:55.302 "r_mbytes_per_sec": 0, 00:08:55.302 "w_mbytes_per_sec": 0 00:08:55.302 }, 00:08:55.302 "claimed": true, 00:08:55.302 "claim_type": "exclusive_write", 00:08:55.302 "zoned": false, 00:08:55.302 "supported_io_types": { 00:08:55.302 "read": true, 00:08:55.302 "write": true, 00:08:55.302 "unmap": true, 00:08:55.302 "flush": true, 00:08:55.302 "reset": true, 00:08:55.302 "nvme_admin": false, 00:08:55.302 "nvme_io": 
false, 00:08:55.302 "nvme_io_md": false, 00:08:55.302 "write_zeroes": true, 00:08:55.302 "zcopy": true, 00:08:55.302 "get_zone_info": false, 00:08:55.302 "zone_management": false, 00:08:55.302 "zone_append": false, 00:08:55.302 "compare": false, 00:08:55.302 "compare_and_write": false, 00:08:55.302 "abort": true, 00:08:55.302 "seek_hole": false, 00:08:55.302 "seek_data": false, 00:08:55.302 "copy": true, 00:08:55.302 "nvme_iov_md": false 00:08:55.302 }, 00:08:55.302 "memory_domains": [ 00:08:55.302 { 00:08:55.302 "dma_device_id": "system", 00:08:55.302 "dma_device_type": 1 00:08:55.302 }, 00:08:55.302 { 00:08:55.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.302 "dma_device_type": 2 00:08:55.302 } 00:08:55.302 ], 00:08:55.302 "driver_specific": {} 00:08:55.302 } 00:08:55.302 ] 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.302 01:28:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.302 "name": "Existed_Raid", 00:08:55.302 "uuid": "f1ec6574-b7b3-4723-acc2-ec7eb2d74be1", 00:08:55.302 "strip_size_kb": 64, 00:08:55.302 "state": "online", 00:08:55.302 "raid_level": "concat", 00:08:55.302 "superblock": true, 00:08:55.302 "num_base_bdevs": 3, 00:08:55.302 "num_base_bdevs_discovered": 3, 00:08:55.302 "num_base_bdevs_operational": 3, 00:08:55.302 "base_bdevs_list": [ 00:08:55.302 { 00:08:55.302 "name": "NewBaseBdev", 00:08:55.302 "uuid": "e1d65e15-4a7b-49f7-9706-570cdb4c5f00", 00:08:55.302 "is_configured": true, 00:08:55.302 "data_offset": 2048, 00:08:55.302 "data_size": 63488 00:08:55.302 }, 00:08:55.302 { 00:08:55.302 "name": "BaseBdev2", 00:08:55.302 "uuid": "fa874a64-397b-4fb1-a537-6009a2745b5b", 00:08:55.302 "is_configured": true, 00:08:55.302 "data_offset": 2048, 00:08:55.302 "data_size": 63488 00:08:55.302 }, 00:08:55.302 { 00:08:55.302 "name": "BaseBdev3", 00:08:55.302 "uuid": "3dfb2a41-2784-4ea1-9005-53fc2af0b04a", 00:08:55.302 "is_configured": true, 00:08:55.302 "data_offset": 2048, 00:08:55.302 "data_size": 63488 00:08:55.302 } 00:08:55.302 ] 00:08:55.302 
}' 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.302 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.872 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:55.872 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:55.872 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:55.872 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:55.872 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:55.872 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:55.872 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:55.872 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:55.872 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.872 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.872 [2024-10-09 01:28:54.469051] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.872 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.872 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:55.872 "name": "Existed_Raid", 00:08:55.872 "aliases": [ 00:08:55.872 "f1ec6574-b7b3-4723-acc2-ec7eb2d74be1" 00:08:55.872 ], 00:08:55.872 "product_name": "Raid Volume", 00:08:55.872 "block_size": 512, 00:08:55.872 "num_blocks": 190464, 00:08:55.872 "uuid": 
"f1ec6574-b7b3-4723-acc2-ec7eb2d74be1", 00:08:55.872 "assigned_rate_limits": { 00:08:55.872 "rw_ios_per_sec": 0, 00:08:55.872 "rw_mbytes_per_sec": 0, 00:08:55.872 "r_mbytes_per_sec": 0, 00:08:55.872 "w_mbytes_per_sec": 0 00:08:55.872 }, 00:08:55.872 "claimed": false, 00:08:55.872 "zoned": false, 00:08:55.872 "supported_io_types": { 00:08:55.872 "read": true, 00:08:55.872 "write": true, 00:08:55.872 "unmap": true, 00:08:55.872 "flush": true, 00:08:55.872 "reset": true, 00:08:55.872 "nvme_admin": false, 00:08:55.872 "nvme_io": false, 00:08:55.872 "nvme_io_md": false, 00:08:55.872 "write_zeroes": true, 00:08:55.872 "zcopy": false, 00:08:55.872 "get_zone_info": false, 00:08:55.872 "zone_management": false, 00:08:55.872 "zone_append": false, 00:08:55.872 "compare": false, 00:08:55.872 "compare_and_write": false, 00:08:55.872 "abort": false, 00:08:55.872 "seek_hole": false, 00:08:55.872 "seek_data": false, 00:08:55.872 "copy": false, 00:08:55.872 "nvme_iov_md": false 00:08:55.872 }, 00:08:55.872 "memory_domains": [ 00:08:55.872 { 00:08:55.872 "dma_device_id": "system", 00:08:55.872 "dma_device_type": 1 00:08:55.872 }, 00:08:55.872 { 00:08:55.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.872 "dma_device_type": 2 00:08:55.872 }, 00:08:55.872 { 00:08:55.872 "dma_device_id": "system", 00:08:55.872 "dma_device_type": 1 00:08:55.872 }, 00:08:55.872 { 00:08:55.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.872 "dma_device_type": 2 00:08:55.872 }, 00:08:55.872 { 00:08:55.872 "dma_device_id": "system", 00:08:55.872 "dma_device_type": 1 00:08:55.873 }, 00:08:55.873 { 00:08:55.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.873 "dma_device_type": 2 00:08:55.873 } 00:08:55.873 ], 00:08:55.873 "driver_specific": { 00:08:55.873 "raid": { 00:08:55.873 "uuid": "f1ec6574-b7b3-4723-acc2-ec7eb2d74be1", 00:08:55.873 "strip_size_kb": 64, 00:08:55.873 "state": "online", 00:08:55.873 "raid_level": "concat", 00:08:55.873 "superblock": true, 00:08:55.873 "num_base_bdevs": 
3, 00:08:55.873 "num_base_bdevs_discovered": 3, 00:08:55.873 "num_base_bdevs_operational": 3, 00:08:55.873 "base_bdevs_list": [ 00:08:55.873 { 00:08:55.873 "name": "NewBaseBdev", 00:08:55.873 "uuid": "e1d65e15-4a7b-49f7-9706-570cdb4c5f00", 00:08:55.873 "is_configured": true, 00:08:55.873 "data_offset": 2048, 00:08:55.873 "data_size": 63488 00:08:55.873 }, 00:08:55.873 { 00:08:55.873 "name": "BaseBdev2", 00:08:55.873 "uuid": "fa874a64-397b-4fb1-a537-6009a2745b5b", 00:08:55.873 "is_configured": true, 00:08:55.873 "data_offset": 2048, 00:08:55.873 "data_size": 63488 00:08:55.873 }, 00:08:55.873 { 00:08:55.873 "name": "BaseBdev3", 00:08:55.873 "uuid": "3dfb2a41-2784-4ea1-9005-53fc2af0b04a", 00:08:55.873 "is_configured": true, 00:08:55.873 "data_offset": 2048, 00:08:55.873 "data_size": 63488 00:08:55.873 } 00:08:55.873 ] 00:08:55.873 } 00:08:55.873 } 00:08:55.873 }' 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:55.873 BaseBdev2 00:08:55.873 BaseBdev3' 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.873 01:28:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.873 [2024-10-09 01:28:54.704850] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:55.873 [2024-10-09 01:28:54.704880] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.873 [2024-10-09 01:28:54.704943] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.873 [2024-10-09 01:28:54.705006] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:55.873 [2024-10-09 01:28:54.705021] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 78455 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 78455 ']' 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 78455 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 
00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78455 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:55.873 killing process with pid 78455 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78455' 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 78455 00:08:55.873 [2024-10-09 01:28:54.754403] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:55.873 01:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 78455 00:08:56.133 [2024-10-09 01:28:54.811089] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:56.393 01:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:56.393 00:08:56.393 real 0m8.780s 00:08:56.393 user 0m14.630s 00:08:56.393 sys 0m1.906s 00:08:56.393 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:56.393 01:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.393 ************************************ 00:08:56.393 END TEST raid_state_function_test_sb 00:08:56.393 ************************************ 00:08:56.393 01:28:55 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:08:56.393 01:28:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:56.393 01:28:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:56.393 01:28:55 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:08:56.393 ************************************ 00:08:56.393 START TEST raid_superblock_test 00:08:56.393 ************************************ 00:08:56.393 01:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:08:56.393 01:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:56.393 01:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:56.393 01:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:56.393 01:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:56.393 01:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:56.393 01:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:56.393 01:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:56.393 01:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:56.393 01:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:56.393 01:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:56.393 01:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:56.393 01:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:56.393 01:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:56.393 01:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:56.393 01:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:56.393 01:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 
00:08:56.393 01:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79054 00:08:56.393 01:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79054 00:08:56.393 01:28:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:56.393 01:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 79054 ']' 00:08:56.393 01:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.393 01:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:56.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.393 01:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.393 01:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:56.393 01:28:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.653 [2024-10-09 01:28:55.354277] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:08:56.653 [2024-10-09 01:28:55.354399] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79054 ] 00:08:56.653 [2024-10-09 01:28:55.489309] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:56.653 [2024-10-09 01:28:55.516486] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.913 [2024-10-09 01:28:55.586832] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.913 [2024-10-09 01:28:55.661720] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.913 [2024-10-09 01:28:55.661768] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.487 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:57.487 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:57.487 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:57.487 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:57.487 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:57.487 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:57.487 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:57.487 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:57.487 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:57.487 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:57.487 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:57.487 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.487 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.487 malloc1 00:08:57.487 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:57.487 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:57.487 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.487 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.487 [2024-10-09 01:28:56.192914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:57.487 [2024-10-09 01:28:56.192996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.487 [2024-10-09 01:28:56.193019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:57.487 [2024-10-09 01:28:56.193031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.487 [2024-10-09 01:28:56.195482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.487 [2024-10-09 01:28:56.195516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:57.487 pt1 00:08:57.487 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.487 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:57.487 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:57.487 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:57.487 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:57.487 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:57.487 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:57.487 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:08:57.487 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.488 malloc2 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.488 [2024-10-09 01:28:56.237774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:57.488 [2024-10-09 01:28:56.237836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.488 [2024-10-09 01:28:56.237857] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:57.488 [2024-10-09 01:28:56.237869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.488 [2024-10-09 01:28:56.240278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.488 [2024-10-09 01:28:56.240308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:57.488 pt2 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.488 malloc3 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.488 [2024-10-09 01:28:56.272891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:57.488 [2024-10-09 01:28:56.272936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.488 [2024-10-09 01:28:56.272956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:57.488 [2024-10-09 01:28:56.272965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:08:57.488 [2024-10-09 01:28:56.275284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.488 [2024-10-09 01:28:56.275316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:57.488 pt3 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.488 [2024-10-09 01:28:56.284974] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:57.488 [2024-10-09 01:28:56.287099] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:57.488 [2024-10-09 01:28:56.287165] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:57.488 [2024-10-09 01:28:56.287308] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:57.488 [2024-10-09 01:28:56.287326] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:57.488 [2024-10-09 01:28:56.287586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:57.488 [2024-10-09 01:28:56.287724] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:57.488 [2024-10-09 01:28:56.287738] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:57.488 [2024-10-09 
01:28:56.287862] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.488 "name": "raid_bdev1", 00:08:57.488 
"uuid": "00e75e77-db51-4949-be58-e3208f97b98a", 00:08:57.488 "strip_size_kb": 64, 00:08:57.488 "state": "online", 00:08:57.488 "raid_level": "concat", 00:08:57.488 "superblock": true, 00:08:57.488 "num_base_bdevs": 3, 00:08:57.488 "num_base_bdevs_discovered": 3, 00:08:57.488 "num_base_bdevs_operational": 3, 00:08:57.488 "base_bdevs_list": [ 00:08:57.488 { 00:08:57.488 "name": "pt1", 00:08:57.488 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:57.488 "is_configured": true, 00:08:57.488 "data_offset": 2048, 00:08:57.488 "data_size": 63488 00:08:57.488 }, 00:08:57.488 { 00:08:57.488 "name": "pt2", 00:08:57.488 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:57.488 "is_configured": true, 00:08:57.488 "data_offset": 2048, 00:08:57.488 "data_size": 63488 00:08:57.488 }, 00:08:57.488 { 00:08:57.488 "name": "pt3", 00:08:57.488 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:57.488 "is_configured": true, 00:08:57.488 "data_offset": 2048, 00:08:57.488 "data_size": 63488 00:08:57.488 } 00:08:57.488 ] 00:08:57.488 }' 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.488 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.057 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:58.057 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:58.057 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:58.057 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:58.057 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:58.058 
01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.058 [2024-10-09 01:28:56.733307] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:58.058 "name": "raid_bdev1", 00:08:58.058 "aliases": [ 00:08:58.058 "00e75e77-db51-4949-be58-e3208f97b98a" 00:08:58.058 ], 00:08:58.058 "product_name": "Raid Volume", 00:08:58.058 "block_size": 512, 00:08:58.058 "num_blocks": 190464, 00:08:58.058 "uuid": "00e75e77-db51-4949-be58-e3208f97b98a", 00:08:58.058 "assigned_rate_limits": { 00:08:58.058 "rw_ios_per_sec": 0, 00:08:58.058 "rw_mbytes_per_sec": 0, 00:08:58.058 "r_mbytes_per_sec": 0, 00:08:58.058 "w_mbytes_per_sec": 0 00:08:58.058 }, 00:08:58.058 "claimed": false, 00:08:58.058 "zoned": false, 00:08:58.058 "supported_io_types": { 00:08:58.058 "read": true, 00:08:58.058 "write": true, 00:08:58.058 "unmap": true, 00:08:58.058 "flush": true, 00:08:58.058 "reset": true, 00:08:58.058 "nvme_admin": false, 00:08:58.058 "nvme_io": false, 00:08:58.058 "nvme_io_md": false, 00:08:58.058 "write_zeroes": true, 00:08:58.058 "zcopy": false, 00:08:58.058 "get_zone_info": false, 00:08:58.058 "zone_management": false, 00:08:58.058 "zone_append": false, 00:08:58.058 "compare": false, 00:08:58.058 "compare_and_write": false, 00:08:58.058 "abort": false, 00:08:58.058 "seek_hole": false, 00:08:58.058 "seek_data": false, 00:08:58.058 "copy": false, 00:08:58.058 "nvme_iov_md": false 00:08:58.058 }, 00:08:58.058 "memory_domains": [ 00:08:58.058 { 00:08:58.058 "dma_device_id": "system", 00:08:58.058 
"dma_device_type": 1 00:08:58.058 }, 00:08:58.058 { 00:08:58.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.058 "dma_device_type": 2 00:08:58.058 }, 00:08:58.058 { 00:08:58.058 "dma_device_id": "system", 00:08:58.058 "dma_device_type": 1 00:08:58.058 }, 00:08:58.058 { 00:08:58.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.058 "dma_device_type": 2 00:08:58.058 }, 00:08:58.058 { 00:08:58.058 "dma_device_id": "system", 00:08:58.058 "dma_device_type": 1 00:08:58.058 }, 00:08:58.058 { 00:08:58.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.058 "dma_device_type": 2 00:08:58.058 } 00:08:58.058 ], 00:08:58.058 "driver_specific": { 00:08:58.058 "raid": { 00:08:58.058 "uuid": "00e75e77-db51-4949-be58-e3208f97b98a", 00:08:58.058 "strip_size_kb": 64, 00:08:58.058 "state": "online", 00:08:58.058 "raid_level": "concat", 00:08:58.058 "superblock": true, 00:08:58.058 "num_base_bdevs": 3, 00:08:58.058 "num_base_bdevs_discovered": 3, 00:08:58.058 "num_base_bdevs_operational": 3, 00:08:58.058 "base_bdevs_list": [ 00:08:58.058 { 00:08:58.058 "name": "pt1", 00:08:58.058 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:58.058 "is_configured": true, 00:08:58.058 "data_offset": 2048, 00:08:58.058 "data_size": 63488 00:08:58.058 }, 00:08:58.058 { 00:08:58.058 "name": "pt2", 00:08:58.058 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:58.058 "is_configured": true, 00:08:58.058 "data_offset": 2048, 00:08:58.058 "data_size": 63488 00:08:58.058 }, 00:08:58.058 { 00:08:58.058 "name": "pt3", 00:08:58.058 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:58.058 "is_configured": true, 00:08:58.058 "data_offset": 2048, 00:08:58.058 "data_size": 63488 00:08:58.058 } 00:08:58.058 ] 00:08:58.058 } 00:08:58.058 } 00:08:58.058 }' 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:58.058 pt2 00:08:58.058 pt3' 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.058 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.318 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.318 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.318 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.318 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:58.318 01:28:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:58.318 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.318 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.318 [2024-10-09 01:28:56.977314] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:58.318 01:28:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=00e75e77-db51-4949-be58-e3208f97b98a 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 00e75e77-db51-4949-be58-e3208f97b98a ']' 00:08:58.318 01:28:57 
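The per-bdev comparisons above match `cmp_base_bdev='512   '` against the pattern `\5\1\2\ \ \ ` (512 plus three escaped spaces). That string comes from jq's `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`: for these passthru bdevs only `block_size` is set, and jq's `join()` renders the null fields as empty strings. A small sketch of that behavior (assuming, as the log suggests, that the metadata fields are null for plain malloc/passthru bdevs):

```python
# Reproduce the jq expression
#   [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")
# for a bdev where only block_size is set. jq's join() turns null elements
# into empty strings, leaving only the separators behind.
fields = [512, None, None, None]  # block_size, md_size, md_interleave, dif_type
joined = " ".join("" if f is None else str(f) for f in fields)
print(repr(joined))  # '512   '  -- 512 followed by three spaces
```

This is why the test's expected pattern carries trailing spaces rather than comparing `512` alone.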
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.318 [2024-10-09 01:28:57.021084] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:58.318 [2024-10-09 01:28:57.021118] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:58.318 [2024-10-09 01:28:57.021188] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:58.318 [2024-10-09 01:28:57.021251] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:58.318 [2024-10-09 01:28:57.021267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.318 01:28:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:58.318 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:58.319 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:58.319 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:58.319 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.319 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:58.319 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:58.319 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:58.319 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.319 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.319 [2024-10-09 01:28:57.173154] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:58.319 [2024-10-09 01:28:57.175285] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:58.319 [2024-10-09 01:28:57.175332] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:58.319 [2024-10-09 01:28:57.175374] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:58.319 [2024-10-09 01:28:57.175413] 
bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:58.319 [2024-10-09 01:28:57.175429] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:58.319 [2024-10-09 01:28:57.175443] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:58.319 [2024-10-09 01:28:57.175452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:58.319 request: 00:08:58.319 { 00:08:58.319 "name": "raid_bdev1", 00:08:58.319 "raid_level": "concat", 00:08:58.319 "base_bdevs": [ 00:08:58.319 "malloc1", 00:08:58.319 "malloc2", 00:08:58.319 "malloc3" 00:08:58.319 ], 00:08:58.319 "strip_size_kb": 64, 00:08:58.319 "superblock": false, 00:08:58.319 "method": "bdev_raid_create", 00:08:58.319 "req_id": 1 00:08:58.319 } 00:08:58.319 Got JSON-RPC error response 00:08:58.319 response: 00:08:58.319 { 00:08:58.319 "code": -17, 00:08:58.319 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:58.319 } 00:08:58.319 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:58.319 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:58.319 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:58.319 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:58.319 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:58.319 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.319 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.319 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.319 01:28:57 
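The `NOT`-wrapped `bdev_raid_create` call above is expected to fail: the malloc bdevs still carry the superblock of the deleted `raid_bdev1`, so the RPC returns the JSON-RPC error shown in the log. A minimal sketch of checking that response (the error object is copied from the log; the harness itself only requires a non-zero exit, so asserting the specific code is a stricter check than the shell test performs):

```python
import json

# JSON-RPC error response copied from the log above: re-creating raid_bdev1 on
# base bdevs with a stale superblock is rejected with -17 (File exists).
response = json.loads("""
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
""")

assert response["code"] == -17
assert "File exists" in response["message"]
```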
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:58.319 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.579 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:58.579 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:58.579 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:58.579 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.579 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.579 [2024-10-09 01:28:57.237135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:58.579 [2024-10-09 01:28:57.237221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.579 [2024-10-09 01:28:57.237255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:58.579 [2024-10-09 01:28:57.237298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.579 [2024-10-09 01:28:57.239623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.579 [2024-10-09 01:28:57.239688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:58.579 [2024-10-09 01:28:57.239790] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:58.579 [2024-10-09 01:28:57.239867] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:58.579 pt1 00:08:58.579 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.579 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:58.579 01:28:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:58.579 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.579 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.579 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.579 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.579 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.579 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.579 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.579 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.579 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:58.579 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.579 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.579 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.579 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.579 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.579 "name": "raid_bdev1", 00:08:58.579 "uuid": "00e75e77-db51-4949-be58-e3208f97b98a", 00:08:58.579 "strip_size_kb": 64, 00:08:58.579 "state": "configuring", 00:08:58.579 "raid_level": "concat", 00:08:58.579 "superblock": true, 00:08:58.579 "num_base_bdevs": 3, 00:08:58.579 "num_base_bdevs_discovered": 1, 00:08:58.579 "num_base_bdevs_operational": 3, 00:08:58.579 "base_bdevs_list": [ 
00:08:58.579 { 00:08:58.579 "name": "pt1", 00:08:58.579 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:58.579 "is_configured": true, 00:08:58.579 "data_offset": 2048, 00:08:58.579 "data_size": 63488 00:08:58.579 }, 00:08:58.579 { 00:08:58.579 "name": null, 00:08:58.579 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:58.579 "is_configured": false, 00:08:58.579 "data_offset": 2048, 00:08:58.579 "data_size": 63488 00:08:58.579 }, 00:08:58.579 { 00:08:58.579 "name": null, 00:08:58.579 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:58.579 "is_configured": false, 00:08:58.579 "data_offset": 2048, 00:08:58.579 "data_size": 63488 00:08:58.579 } 00:08:58.579 ] 00:08:58.579 }' 00:08:58.579 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.579 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.839 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:58.839 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:58.839 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.839 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.839 [2024-10-09 01:28:57.645245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:58.839 [2024-10-09 01:28:57.645294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.839 [2024-10-09 01:28:57.645315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:08:58.839 [2024-10-09 01:28:57.645325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.839 [2024-10-09 01:28:57.645697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.839 [2024-10-09 
01:28:57.645765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:58.839 [2024-10-09 01:28:57.645842] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:58.839 [2024-10-09 01:28:57.645862] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:58.839 pt2 00:08:58.839 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.839 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:58.839 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.839 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.839 [2024-10-09 01:28:57.657283] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:58.839 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.839 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:58.839 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:58.839 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.839 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.839 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.839 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.839 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.839 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.839 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:58.839 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.839 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:58.839 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.839 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.839 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.839 01:28:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.839 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.839 "name": "raid_bdev1", 00:08:58.839 "uuid": "00e75e77-db51-4949-be58-e3208f97b98a", 00:08:58.839 "strip_size_kb": 64, 00:08:58.839 "state": "configuring", 00:08:58.839 "raid_level": "concat", 00:08:58.839 "superblock": true, 00:08:58.839 "num_base_bdevs": 3, 00:08:58.839 "num_base_bdevs_discovered": 1, 00:08:58.839 "num_base_bdevs_operational": 3, 00:08:58.839 "base_bdevs_list": [ 00:08:58.839 { 00:08:58.839 "name": "pt1", 00:08:58.839 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:58.840 "is_configured": true, 00:08:58.840 "data_offset": 2048, 00:08:58.840 "data_size": 63488 00:08:58.840 }, 00:08:58.840 { 00:08:58.840 "name": null, 00:08:58.840 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:58.840 "is_configured": false, 00:08:58.840 "data_offset": 0, 00:08:58.840 "data_size": 63488 00:08:58.840 }, 00:08:58.840 { 00:08:58.840 "name": null, 00:08:58.840 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:58.840 "is_configured": false, 00:08:58.840 "data_offset": 2048, 00:08:58.840 "data_size": 63488 00:08:58.840 } 00:08:58.840 ] 00:08:58.840 }' 00:08:58.840 01:28:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.840 01:28:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.409 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:59.409 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:59.409 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:59.409 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.409 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.409 [2024-10-09 01:28:58.065352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:59.409 [2024-10-09 01:28:58.065455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.409 [2024-10-09 01:28:58.065488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:59.409 [2024-10-09 01:28:58.065530] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.409 [2024-10-09 01:28:58.065927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.409 [2024-10-09 01:28:58.065994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:59.409 [2024-10-09 01:28:58.066081] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:59.410 [2024-10-09 01:28:58.066130] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:59.410 pt2 00:08:59.410 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.410 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:59.410 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:59.410 01:28:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:59.410 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.410 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.410 [2024-10-09 01:28:58.077370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:59.410 [2024-10-09 01:28:58.077455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.410 [2024-10-09 01:28:58.077485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:59.410 [2024-10-09 01:28:58.077515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.410 [2024-10-09 01:28:58.077892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.410 [2024-10-09 01:28:58.077958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:59.410 [2024-10-09 01:28:58.078035] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:59.410 [2024-10-09 01:28:58.078097] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:59.410 [2024-10-09 01:28:58.078207] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:59.410 [2024-10-09 01:28:58.078247] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:59.410 [2024-10-09 01:28:58.078504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:59.410 [2024-10-09 01:28:58.078674] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:59.410 [2024-10-09 01:28:58.078711] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:59.410 [2024-10-09 01:28:58.078853] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.410 pt3 00:08:59.410 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.410 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:59.410 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:59.410 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:59.410 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:59.410 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.410 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.410 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.410 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.410 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.410 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.410 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.410 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.410 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.410 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.410 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.410 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.410 01:28:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.410 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.410 "name": "raid_bdev1", 00:08:59.410 "uuid": "00e75e77-db51-4949-be58-e3208f97b98a", 00:08:59.410 "strip_size_kb": 64, 00:08:59.410 "state": "online", 00:08:59.410 "raid_level": "concat", 00:08:59.410 "superblock": true, 00:08:59.410 "num_base_bdevs": 3, 00:08:59.410 "num_base_bdevs_discovered": 3, 00:08:59.410 "num_base_bdevs_operational": 3, 00:08:59.410 "base_bdevs_list": [ 00:08:59.410 { 00:08:59.410 "name": "pt1", 00:08:59.410 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:59.410 "is_configured": true, 00:08:59.410 "data_offset": 2048, 00:08:59.410 "data_size": 63488 00:08:59.410 }, 00:08:59.410 { 00:08:59.410 "name": "pt2", 00:08:59.410 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:59.410 "is_configured": true, 00:08:59.410 "data_offset": 2048, 00:08:59.410 "data_size": 63488 00:08:59.410 }, 00:08:59.410 { 00:08:59.410 "name": "pt3", 00:08:59.410 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:59.410 "is_configured": true, 00:08:59.410 "data_offset": 2048, 00:08:59.410 "data_size": 63488 00:08:59.410 } 00:08:59.410 ] 00:08:59.410 }' 00:08:59.410 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.410 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.670 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:59.670 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:59.670 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:59.670 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:59.670 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:59.670 01:28:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.930 [2024-10-09 01:28:58.569829] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:59.930 "name": "raid_bdev1", 00:08:59.930 "aliases": [ 00:08:59.930 "00e75e77-db51-4949-be58-e3208f97b98a" 00:08:59.930 ], 00:08:59.930 "product_name": "Raid Volume", 00:08:59.930 "block_size": 512, 00:08:59.930 "num_blocks": 190464, 00:08:59.930 "uuid": "00e75e77-db51-4949-be58-e3208f97b98a", 00:08:59.930 "assigned_rate_limits": { 00:08:59.930 "rw_ios_per_sec": 0, 00:08:59.930 "rw_mbytes_per_sec": 0, 00:08:59.930 "r_mbytes_per_sec": 0, 00:08:59.930 "w_mbytes_per_sec": 0 00:08:59.930 }, 00:08:59.930 "claimed": false, 00:08:59.930 "zoned": false, 00:08:59.930 "supported_io_types": { 00:08:59.930 "read": true, 00:08:59.930 "write": true, 00:08:59.930 "unmap": true, 00:08:59.930 "flush": true, 00:08:59.930 "reset": true, 00:08:59.930 "nvme_admin": false, 00:08:59.930 "nvme_io": false, 00:08:59.930 "nvme_io_md": false, 00:08:59.930 "write_zeroes": true, 00:08:59.930 "zcopy": false, 00:08:59.930 "get_zone_info": false, 00:08:59.930 "zone_management": false, 00:08:59.930 "zone_append": false, 00:08:59.930 "compare": false, 00:08:59.930 "compare_and_write": false, 00:08:59.930 "abort": false, 00:08:59.930 "seek_hole": false, 00:08:59.930 
"seek_data": false, 00:08:59.930 "copy": false, 00:08:59.930 "nvme_iov_md": false 00:08:59.930 }, 00:08:59.930 "memory_domains": [ 00:08:59.930 { 00:08:59.930 "dma_device_id": "system", 00:08:59.930 "dma_device_type": 1 00:08:59.930 }, 00:08:59.930 { 00:08:59.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.930 "dma_device_type": 2 00:08:59.930 }, 00:08:59.930 { 00:08:59.930 "dma_device_id": "system", 00:08:59.930 "dma_device_type": 1 00:08:59.930 }, 00:08:59.930 { 00:08:59.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.930 "dma_device_type": 2 00:08:59.930 }, 00:08:59.930 { 00:08:59.930 "dma_device_id": "system", 00:08:59.930 "dma_device_type": 1 00:08:59.930 }, 00:08:59.930 { 00:08:59.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.930 "dma_device_type": 2 00:08:59.930 } 00:08:59.930 ], 00:08:59.930 "driver_specific": { 00:08:59.930 "raid": { 00:08:59.930 "uuid": "00e75e77-db51-4949-be58-e3208f97b98a", 00:08:59.930 "strip_size_kb": 64, 00:08:59.930 "state": "online", 00:08:59.930 "raid_level": "concat", 00:08:59.930 "superblock": true, 00:08:59.930 "num_base_bdevs": 3, 00:08:59.930 "num_base_bdevs_discovered": 3, 00:08:59.930 "num_base_bdevs_operational": 3, 00:08:59.930 "base_bdevs_list": [ 00:08:59.930 { 00:08:59.930 "name": "pt1", 00:08:59.930 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:59.930 "is_configured": true, 00:08:59.930 "data_offset": 2048, 00:08:59.930 "data_size": 63488 00:08:59.930 }, 00:08:59.930 { 00:08:59.930 "name": "pt2", 00:08:59.930 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:59.930 "is_configured": true, 00:08:59.930 "data_offset": 2048, 00:08:59.930 "data_size": 63488 00:08:59.930 }, 00:08:59.930 { 00:08:59.930 "name": "pt3", 00:08:59.930 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:59.930 "is_configured": true, 00:08:59.930 "data_offset": 2048, 00:08:59.930 "data_size": 63488 00:08:59.930 } 00:08:59.930 ] 00:08:59.930 } 00:08:59.930 } 00:08:59.930 }' 00:08:59.930 01:28:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:59.930 pt2 00:08:59.930 pt3' 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.930 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.200 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.200 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.200 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.200 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:00.200 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:00.200 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.200 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.200 [2024-10-09 01:28:58.841815] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.200 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.200 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
00e75e77-db51-4949-be58-e3208f97b98a '!=' 00e75e77-db51-4949-be58-e3208f97b98a ']' 00:09:00.200 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:00.200 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:00.200 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:00.200 01:28:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79054 00:09:00.200 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 79054 ']' 00:09:00.200 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 79054 00:09:00.200 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:00.200 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:00.200 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79054 00:09:00.200 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:00.200 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:00.200 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79054' 00:09:00.200 killing process with pid 79054 00:09:00.200 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 79054 00:09:00.200 [2024-10-09 01:28:58.925103] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:00.200 [2024-10-09 01:28:58.925243] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.200 01:28:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 79054 00:09:00.200 [2024-10-09 01:28:58.925328] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:09:00.200 [2024-10-09 01:28:58.925345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:00.200 [2024-10-09 01:28:58.984595] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:00.472 01:28:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:00.472 00:09:00.472 real 0m4.097s 00:09:00.472 user 0m6.244s 00:09:00.472 sys 0m0.986s 00:09:00.472 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.472 01:28:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.472 ************************************ 00:09:00.472 END TEST raid_superblock_test 00:09:00.472 ************************************ 00:09:00.732 01:28:59 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:00.732 01:28:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:00.732 01:28:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.732 01:28:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:00.732 ************************************ 00:09:00.732 START TEST raid_read_error_test 00:09:00.732 ************************************ 00:09:00.732 01:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:09:00.732 01:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:00.732 01:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:00.732 01:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:00.732 01:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:00.732 01:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:00.732 01:28:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:00.732 01:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:00.732 01:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:00.732 01:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:00.732 01:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:00.732 01:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:00.732 01:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:00.732 01:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:00.732 01:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:00.732 01:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:00.732 01:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:00.732 01:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:00.732 01:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:00.732 01:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:00.732 01:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:00.732 01:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:00.732 01:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:00.733 01:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:00.733 01:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:00.733 01:28:59 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:00.733 01:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.JnUEyg1th9 00:09:00.733 01:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=79301 00:09:00.733 01:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:00.733 01:28:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 79301 00:09:00.733 01:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 79301 ']' 00:09:00.733 01:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.733 01:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:00.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.733 01:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.733 01:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:00.733 01:28:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.733 [2024-10-09 01:28:59.537070] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:09:00.733 [2024-10-09 01:28:59.537286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79301 ] 00:09:00.992 [2024-10-09 01:28:59.668712] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:09:00.992 [2024-10-09 01:28:59.681984] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.992 [2024-10-09 01:28:59.752604] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.992 [2024-10-09 01:28:59.828563] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.992 [2024-10-09 01:28:59.828603] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.561 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:01.561 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:01.561 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:01.561 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:01.561 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.561 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.561 BaseBdev1_malloc 00:09:01.561 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.561 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:01.561 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.561 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.561 true 00:09:01.561 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.561 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:01.561 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:01.561 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.561 [2024-10-09 01:29:00.391531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:01.561 [2024-10-09 01:29:00.391705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.561 [2024-10-09 01:29:00.391732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:01.561 [2024-10-09 01:29:00.391749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.561 [2024-10-09 01:29:00.394121] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.561 [2024-10-09 01:29:00.394159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:01.561 BaseBdev1 00:09:01.561 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.561 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:01.561 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:01.561 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.561 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.561 BaseBdev2_malloc 00:09:01.561 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.561 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:01.561 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.561 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.561 true 00:09:01.561 01:29:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.561 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:01.561 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.561 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.561 [2024-10-09 01:29:00.448761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:01.562 [2024-10-09 01:29:00.448814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.562 [2024-10-09 01:29:00.448829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:01.562 [2024-10-09 01:29:00.448840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.562 [2024-10-09 01:29:00.451167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.562 [2024-10-09 01:29:00.451302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:01.562 BaseBdev2 00:09:01.821 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.821 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:01.821 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:01.821 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.821 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.821 BaseBdev3_malloc 00:09:01.821 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:01.822 
01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.822 true 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.822 [2024-10-09 01:29:00.495213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:01.822 [2024-10-09 01:29:00.495263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.822 [2024-10-09 01:29:00.495279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:01.822 [2024-10-09 01:29:00.495290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.822 [2024-10-09 01:29:00.497587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.822 [2024-10-09 01:29:00.497696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:01.822 BaseBdev3 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.822 [2024-10-09 01:29:00.507295] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:01.822 [2024-10-09 01:29:00.509365] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:01.822 [2024-10-09 01:29:00.509443] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:01.822 [2024-10-09 01:29:00.509635] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:01.822 [2024-10-09 01:29:00.509647] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:01.822 [2024-10-09 01:29:00.509900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:01.822 [2024-10-09 01:29:00.510033] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:01.822 [2024-10-09 01:29:00.510047] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:01.822 [2024-10-09 01:29:00.510163] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.822 "name": "raid_bdev1", 00:09:01.822 "uuid": "d6b6aa4c-d5da-4fb4-b4fa-df129d924b0b", 00:09:01.822 "strip_size_kb": 64, 00:09:01.822 "state": "online", 00:09:01.822 "raid_level": "concat", 00:09:01.822 "superblock": true, 00:09:01.822 "num_base_bdevs": 3, 00:09:01.822 "num_base_bdevs_discovered": 3, 00:09:01.822 "num_base_bdevs_operational": 3, 00:09:01.822 "base_bdevs_list": [ 00:09:01.822 { 00:09:01.822 "name": "BaseBdev1", 00:09:01.822 "uuid": "f2124418-076d-558b-b207-c8d550b92afd", 00:09:01.822 "is_configured": true, 00:09:01.822 "data_offset": 2048, 00:09:01.822 "data_size": 63488 00:09:01.822 }, 00:09:01.822 { 00:09:01.822 "name": "BaseBdev2", 00:09:01.822 "uuid": "2d034b9d-357d-52da-8b79-d31e1b8ede0d", 00:09:01.822 "is_configured": true, 00:09:01.822 "data_offset": 2048, 00:09:01.822 "data_size": 63488 00:09:01.822 }, 00:09:01.822 { 00:09:01.822 "name": "BaseBdev3", 00:09:01.822 "uuid": "961eb8c7-94b9-5303-b34a-9fd7e6f791ab", 00:09:01.822 "is_configured": true, 00:09:01.822 "data_offset": 
2048, 00:09:01.822 "data_size": 63488 00:09:01.822 } 00:09:01.822 ] 00:09:01.822 }' 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.822 01:29:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.081 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:02.082 01:29:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:02.082 [2024-10-09 01:29:00.959852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:03.021 01:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:03.021 01:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.021 01:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.021 01:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.021 01:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:03.021 01:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:03.021 01:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:03.021 01:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:03.021 01:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:03.021 01:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.021 01:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.021 01:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:03.021 01:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.021 01:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.021 01:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.021 01:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.021 01:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.021 01:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.021 01:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:03.021 01:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.021 01:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.281 01:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.281 01:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.281 "name": "raid_bdev1", 00:09:03.281 "uuid": "d6b6aa4c-d5da-4fb4-b4fa-df129d924b0b", 00:09:03.281 "strip_size_kb": 64, 00:09:03.281 "state": "online", 00:09:03.281 "raid_level": "concat", 00:09:03.281 "superblock": true, 00:09:03.281 "num_base_bdevs": 3, 00:09:03.281 "num_base_bdevs_discovered": 3, 00:09:03.281 "num_base_bdevs_operational": 3, 00:09:03.281 "base_bdevs_list": [ 00:09:03.281 { 00:09:03.281 "name": "BaseBdev1", 00:09:03.281 "uuid": "f2124418-076d-558b-b207-c8d550b92afd", 00:09:03.281 "is_configured": true, 00:09:03.281 "data_offset": 2048, 00:09:03.281 "data_size": 63488 00:09:03.281 }, 00:09:03.281 { 00:09:03.281 "name": "BaseBdev2", 00:09:03.281 "uuid": "2d034b9d-357d-52da-8b79-d31e1b8ede0d", 00:09:03.281 "is_configured": true, 00:09:03.281 "data_offset": 2048, 
00:09:03.281 "data_size": 63488 00:09:03.281 }, 00:09:03.281 { 00:09:03.281 "name": "BaseBdev3", 00:09:03.281 "uuid": "961eb8c7-94b9-5303-b34a-9fd7e6f791ab", 00:09:03.281 "is_configured": true, 00:09:03.281 "data_offset": 2048, 00:09:03.281 "data_size": 63488 00:09:03.281 } 00:09:03.281 ] 00:09:03.281 }' 00:09:03.281 01:29:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.281 01:29:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.542 01:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:03.542 01:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.542 01:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.542 [2024-10-09 01:29:02.371126] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:03.542 [2024-10-09 01:29:02.371181] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:03.542 [2024-10-09 01:29:02.373698] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:03.542 [2024-10-09 01:29:02.373777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.542 [2024-10-09 01:29:02.373856] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:03.542 [2024-10-09 01:29:02.373912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:03.542 { 00:09:03.542 "results": [ 00:09:03.542 { 00:09:03.542 "job": "raid_bdev1", 00:09:03.542 "core_mask": "0x1", 00:09:03.542 "workload": "randrw", 00:09:03.542 "percentage": 50, 00:09:03.542 "status": "finished", 00:09:03.542 "queue_depth": 1, 00:09:03.542 "io_size": 131072, 00:09:03.542 "runtime": 1.409144, 00:09:03.542 "iops": 15385.226775971796, 00:09:03.542 "mibps": 
1923.1533469964745, 00:09:03.542 "io_failed": 1, 00:09:03.542 "io_timeout": 0, 00:09:03.542 "avg_latency_us": 91.16150393875704, 00:09:03.542 "min_latency_us": 24.656149219907608, 00:09:03.542 "max_latency_us": 1299.5241000610129 00:09:03.542 } 00:09:03.542 ], 00:09:03.542 "core_count": 1 00:09:03.542 } 00:09:03.542 01:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.542 01:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 79301 00:09:03.542 01:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 79301 ']' 00:09:03.542 01:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 79301 00:09:03.542 01:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:03.542 01:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:03.542 01:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79301 00:09:03.542 01:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:03.542 01:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:03.542 01:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79301' 00:09:03.542 killing process with pid 79301 00:09:03.542 01:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 79301 00:09:03.542 [2024-10-09 01:29:02.416768] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:03.542 01:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 79301 00:09:03.802 [2024-10-09 01:29:02.462159] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:04.062 01:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:04.062 01:29:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.JnUEyg1th9 00:09:04.062 01:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:04.062 01:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:04.062 01:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:04.062 01:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:04.062 01:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:04.062 01:29:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:04.062 ************************************ 00:09:04.062 END TEST raid_read_error_test 00:09:04.062 ************************************ 00:09:04.062 00:09:04.062 real 0m3.410s 00:09:04.062 user 0m4.119s 00:09:04.062 sys 0m0.637s 00:09:04.062 01:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:04.062 01:29:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.062 01:29:02 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:04.062 01:29:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:04.062 01:29:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:04.062 01:29:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:04.062 ************************************ 00:09:04.062 START TEST raid_write_error_test 00:09:04.062 ************************************ 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 
00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:04.062 
01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SqrLC2yGnA 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=79430 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 79430 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 79430 ']' 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:04.062 01:29:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.323 [2024-10-09 01:29:03.020323] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:09:04.323 [2024-10-09 01:29:03.020505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79430 ] 00:09:04.323 [2024-10-09 01:29:03.152269] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:04.323 [2024-10-09 01:29:03.180408] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.583 [2024-10-09 01:29:03.249542] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.583 [2024-10-09 01:29:03.325282] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.583 [2024-10-09 01:29:03.325444] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.152 BaseBdev1_malloc 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.152 01:29:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.152 true 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.152 [2024-10-09 01:29:03.876380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:05.152 [2024-10-09 01:29:03.876463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.152 [2024-10-09 01:29:03.876486] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:05.152 [2024-10-09 01:29:03.876502] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.152 [2024-10-09 01:29:03.878859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.152 [2024-10-09 01:29:03.878893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:05.152 BaseBdev1 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.152 BaseBdev2_malloc 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.152 true 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.152 [2024-10-09 01:29:03.938739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:05.152 [2024-10-09 01:29:03.938814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.152 [2024-10-09 01:29:03.938838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:05.152 [2024-10-09 01:29:03.938855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.152 [2024-10-09 01:29:03.941872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.152 [2024-10-09 01:29:03.941913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:05.152 BaseBdev2 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:05.152 01:29:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.152 BaseBdev3_malloc 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.152 true 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.152 [2024-10-09 01:29:03.985143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:05.152 [2024-10-09 01:29:03.985194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.152 [2024-10-09 01:29:03.985210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:05.152 [2024-10-09 01:29:03.985221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.152 [2024-10-09 01:29:03.987584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.152 [2024-10-09 01:29:03.987714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:05.152 BaseBdev3 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.152 01:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.152 [2024-10-09 01:29:03.997227] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.152 [2024-10-09 01:29:03.999411] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:05.152 [2024-10-09 01:29:03.999550] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:05.152 [2024-10-09 01:29:03.999782] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:05.152 [2024-10-09 01:29:03.999827] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:05.152 [2024-10-09 01:29:04.000105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:05.152 [2024-10-09 01:29:04.000277] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:05.152 [2024-10-09 01:29:04.000321] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:05.152 [2024-10-09 01:29:04.000491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.152 01:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.152 01:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:05.152 01:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:05.152 01:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:09:05.152 01:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.152 01:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.152 01:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.152 01:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.152 01:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.152 01:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.152 01:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.152 01:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.152 01:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:05.152 01:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.152 01:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.152 01:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.412 01:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.412 "name": "raid_bdev1", 00:09:05.412 "uuid": "272ce819-2ab8-448f-b254-f9f8c57fa79c", 00:09:05.412 "strip_size_kb": 64, 00:09:05.412 "state": "online", 00:09:05.412 "raid_level": "concat", 00:09:05.412 "superblock": true, 00:09:05.412 "num_base_bdevs": 3, 00:09:05.412 "num_base_bdevs_discovered": 3, 00:09:05.412 "num_base_bdevs_operational": 3, 00:09:05.412 "base_bdevs_list": [ 00:09:05.412 { 00:09:05.412 "name": "BaseBdev1", 00:09:05.412 "uuid": "34f719b6-70bd-57df-bba6-c57655d61e42", 00:09:05.412 "is_configured": true, 00:09:05.412 "data_offset": 2048, 
00:09:05.412 "data_size": 63488 00:09:05.412 }, 00:09:05.412 { 00:09:05.412 "name": "BaseBdev2", 00:09:05.412 "uuid": "b58c0c25-e0b0-5d31-8722-6a54acbcceb7", 00:09:05.412 "is_configured": true, 00:09:05.412 "data_offset": 2048, 00:09:05.412 "data_size": 63488 00:09:05.412 }, 00:09:05.412 { 00:09:05.412 "name": "BaseBdev3", 00:09:05.412 "uuid": "9b4e44d5-4b2b-55d8-a538-66adafb19b6f", 00:09:05.412 "is_configured": true, 00:09:05.412 "data_offset": 2048, 00:09:05.412 "data_size": 63488 00:09:05.412 } 00:09:05.412 ] 00:09:05.412 }' 00:09:05.412 01:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.412 01:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.672 01:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:05.672 01:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:05.672 [2024-10-09 01:29:04.477872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:06.612 01:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:06.612 01:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.612 01:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.612 01:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.612 01:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:06.612 01:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:06.612 01:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:06.612 01:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # 
verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:06.612 01:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:06.612 01:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.612 01:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.613 01:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.613 01:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.613 01:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.613 01:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.613 01:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.613 01:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.613 01:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:06.613 01:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.613 01:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.613 01:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.613 01:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.613 01:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.613 "name": "raid_bdev1", 00:09:06.613 "uuid": "272ce819-2ab8-448f-b254-f9f8c57fa79c", 00:09:06.613 "strip_size_kb": 64, 00:09:06.613 "state": "online", 00:09:06.613 "raid_level": "concat", 00:09:06.613 "superblock": true, 00:09:06.613 "num_base_bdevs": 3, 00:09:06.613 "num_base_bdevs_discovered": 3, 
00:09:06.613 "num_base_bdevs_operational": 3, 00:09:06.613 "base_bdevs_list": [ 00:09:06.613 { 00:09:06.613 "name": "BaseBdev1", 00:09:06.613 "uuid": "34f719b6-70bd-57df-bba6-c57655d61e42", 00:09:06.613 "is_configured": true, 00:09:06.613 "data_offset": 2048, 00:09:06.613 "data_size": 63488 00:09:06.613 }, 00:09:06.613 { 00:09:06.613 "name": "BaseBdev2", 00:09:06.613 "uuid": "b58c0c25-e0b0-5d31-8722-6a54acbcceb7", 00:09:06.613 "is_configured": true, 00:09:06.613 "data_offset": 2048, 00:09:06.613 "data_size": 63488 00:09:06.613 }, 00:09:06.613 { 00:09:06.613 "name": "BaseBdev3", 00:09:06.613 "uuid": "9b4e44d5-4b2b-55d8-a538-66adafb19b6f", 00:09:06.613 "is_configured": true, 00:09:06.613 "data_offset": 2048, 00:09:06.613 "data_size": 63488 00:09:06.613 } 00:09:06.613 ] 00:09:06.613 }' 00:09:06.613 01:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.613 01:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.192 01:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:07.192 01:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.192 01:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.192 [2024-10-09 01:29:05.837626] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:07.193 [2024-10-09 01:29:05.837767] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.193 [2024-10-09 01:29:05.840388] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.193 [2024-10-09 01:29:05.840446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.193 [2024-10-09 01:29:05.840487] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:07.193 [2024-10-09 01:29:05.840496] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:07.193 { 00:09:07.193 "results": [ 00:09:07.193 { 00:09:07.193 "job": "raid_bdev1", 00:09:07.193 "core_mask": "0x1", 00:09:07.193 "workload": "randrw", 00:09:07.193 "percentage": 50, 00:09:07.193 "status": "finished", 00:09:07.193 "queue_depth": 1, 00:09:07.193 "io_size": 131072, 00:09:07.193 "runtime": 1.357676, 00:09:07.193 "iops": 15268.738638673734, 00:09:07.193 "mibps": 1908.5923298342168, 00:09:07.193 "io_failed": 1, 00:09:07.193 "io_timeout": 0, 00:09:07.193 "avg_latency_us": 91.75708142809577, 00:09:07.193 "min_latency_us": 24.20988407565589, 00:09:07.193 "max_latency_us": 1363.7862808332607 00:09:07.193 } 00:09:07.193 ], 00:09:07.193 "core_count": 1 00:09:07.193 } 00:09:07.193 01:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.193 01:29:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 79430 00:09:07.193 01:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 79430 ']' 00:09:07.194 01:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 79430 00:09:07.194 01:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:07.194 01:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:07.194 01:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79430 00:09:07.194 01:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:07.194 01:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:07.194 01:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79430' 00:09:07.194 killing process with pid 79430 00:09:07.194 01:29:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 79430 00:09:07.194 [2024-10-09 01:29:05.877903] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:07.194 01:29:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 79430 00:09:07.194 [2024-10-09 01:29:05.922903] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:07.459 01:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SqrLC2yGnA 00:09:07.459 01:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:07.459 01:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:07.459 01:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:07.459 01:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:07.459 ************************************ 00:09:07.459 END TEST raid_write_error_test 00:09:07.459 ************************************ 00:09:07.459 01:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:07.459 01:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:07.459 01:29:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:07.459 00:09:07.459 real 0m3.382s 00:09:07.459 user 0m4.068s 00:09:07.459 sys 0m0.624s 00:09:07.459 01:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:07.459 01:29:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.719 01:29:06 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:07.719 01:29:06 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:09:07.719 01:29:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:07.719 01:29:06 
bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.719 01:29:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:07.719 ************************************ 00:09:07.719 START TEST raid_state_function_test 00:09:07.719 ************************************ 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79562 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79562' 00:09:07.719 Process raid pid: 79562 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79562 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 79562 ']' 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 
-- # local max_retries=100 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:07.719 01:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.719 [2024-10-09 01:29:06.465936] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:09:07.719 [2024-10-09 01:29:06.466150] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.719 [2024-10-09 01:29:06.598343] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:07.978 [2024-10-09 01:29:06.627090] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.978 [2024-10-09 01:29:06.695547] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.978 [2024-10-09 01:29:06.770702] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.978 [2024-10-09 01:29:06.770745] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.547 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:08.547 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:08.547 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:08.547 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.547 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.547 [2024-10-09 01:29:07.286491] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.547 [2024-10-09 01:29:07.286563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.547 [2024-10-09 01:29:07.286583] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:08.547 [2024-10-09 01:29:07.286591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:08.547 [2024-10-09 01:29:07.286602] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:08.547 [2024-10-09 01:29:07.286609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:08.547 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.547 01:29:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:08.547 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.547 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.547 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:08.547 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:08.547 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.547 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.547 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.547 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.547 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.547 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.547 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.547 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.547 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.547 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.547 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.547 "name": "Existed_Raid", 00:09:08.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.547 "strip_size_kb": 0, 00:09:08.547 "state": "configuring", 00:09:08.547 "raid_level": "raid1", 00:09:08.547 
"superblock": false, 00:09:08.547 "num_base_bdevs": 3, 00:09:08.547 "num_base_bdevs_discovered": 0, 00:09:08.547 "num_base_bdevs_operational": 3, 00:09:08.547 "base_bdevs_list": [ 00:09:08.547 { 00:09:08.547 "name": "BaseBdev1", 00:09:08.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.548 "is_configured": false, 00:09:08.548 "data_offset": 0, 00:09:08.548 "data_size": 0 00:09:08.548 }, 00:09:08.548 { 00:09:08.548 "name": "BaseBdev2", 00:09:08.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.548 "is_configured": false, 00:09:08.548 "data_offset": 0, 00:09:08.548 "data_size": 0 00:09:08.548 }, 00:09:08.548 { 00:09:08.548 "name": "BaseBdev3", 00:09:08.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.548 "is_configured": false, 00:09:08.548 "data_offset": 0, 00:09:08.548 "data_size": 0 00:09:08.548 } 00:09:08.548 ] 00:09:08.548 }' 00:09:08.548 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.548 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.121 [2024-10-09 01:29:07.726492] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:09.121 [2024-10-09 01:29:07.726600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:09.121 01:29:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.121 [2024-10-09 01:29:07.738505] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:09.121 [2024-10-09 01:29:07.738590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:09.121 [2024-10-09 01:29:07.738619] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.121 [2024-10-09 01:29:07.738639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.121 [2024-10-09 01:29:07.738659] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:09.121 [2024-10-09 01:29:07.738676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.121 [2024-10-09 01:29:07.765305] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.121 BaseBdev1 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.121 [ 00:09:09.121 { 00:09:09.121 "name": "BaseBdev1", 00:09:09.121 "aliases": [ 00:09:09.121 "9b96827a-4209-4368-a317-a937598f8cb0" 00:09:09.121 ], 00:09:09.121 "product_name": "Malloc disk", 00:09:09.121 "block_size": 512, 00:09:09.121 "num_blocks": 65536, 00:09:09.121 "uuid": "9b96827a-4209-4368-a317-a937598f8cb0", 00:09:09.121 "assigned_rate_limits": { 00:09:09.121 "rw_ios_per_sec": 0, 00:09:09.121 "rw_mbytes_per_sec": 0, 00:09:09.121 "r_mbytes_per_sec": 0, 00:09:09.121 "w_mbytes_per_sec": 0 00:09:09.121 }, 00:09:09.121 "claimed": true, 00:09:09.121 "claim_type": "exclusive_write", 00:09:09.121 "zoned": false, 00:09:09.121 "supported_io_types": { 00:09:09.121 "read": true, 00:09:09.121 "write": true, 00:09:09.121 "unmap": true, 00:09:09.121 "flush": true, 00:09:09.121 "reset": true, 00:09:09.121 
"nvme_admin": false, 00:09:09.121 "nvme_io": false, 00:09:09.121 "nvme_io_md": false, 00:09:09.121 "write_zeroes": true, 00:09:09.121 "zcopy": true, 00:09:09.121 "get_zone_info": false, 00:09:09.121 "zone_management": false, 00:09:09.121 "zone_append": false, 00:09:09.121 "compare": false, 00:09:09.121 "compare_and_write": false, 00:09:09.121 "abort": true, 00:09:09.121 "seek_hole": false, 00:09:09.121 "seek_data": false, 00:09:09.121 "copy": true, 00:09:09.121 "nvme_iov_md": false 00:09:09.121 }, 00:09:09.121 "memory_domains": [ 00:09:09.121 { 00:09:09.121 "dma_device_id": "system", 00:09:09.121 "dma_device_type": 1 00:09:09.121 }, 00:09:09.121 { 00:09:09.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.121 "dma_device_type": 2 00:09:09.121 } 00:09:09.121 ], 00:09:09.121 "driver_specific": {} 00:09:09.121 } 00:09:09.121 ] 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.121 "name": "Existed_Raid", 00:09:09.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.121 "strip_size_kb": 0, 00:09:09.121 "state": "configuring", 00:09:09.121 "raid_level": "raid1", 00:09:09.121 "superblock": false, 00:09:09.121 "num_base_bdevs": 3, 00:09:09.121 "num_base_bdevs_discovered": 1, 00:09:09.121 "num_base_bdevs_operational": 3, 00:09:09.121 "base_bdevs_list": [ 00:09:09.121 { 00:09:09.121 "name": "BaseBdev1", 00:09:09.121 "uuid": "9b96827a-4209-4368-a317-a937598f8cb0", 00:09:09.121 "is_configured": true, 00:09:09.121 "data_offset": 0, 00:09:09.121 "data_size": 65536 00:09:09.121 }, 00:09:09.121 { 00:09:09.121 "name": "BaseBdev2", 00:09:09.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.121 "is_configured": false, 00:09:09.121 "data_offset": 0, 00:09:09.121 "data_size": 0 00:09:09.121 }, 00:09:09.121 { 00:09:09.121 "name": "BaseBdev3", 00:09:09.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.121 "is_configured": false, 00:09:09.121 "data_offset": 0, 00:09:09.121 "data_size": 0 00:09:09.121 } 00:09:09.121 ] 00:09:09.121 }' 
00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.121 01:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.386 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:09.386 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.386 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.386 [2024-10-09 01:29:08.237434] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:09.386 [2024-10-09 01:29:08.237548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:09.386 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.386 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:09.386 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.386 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.386 [2024-10-09 01:29:08.249472] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:09.386 [2024-10-09 01:29:08.251636] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.386 [2024-10-09 01:29:08.251718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.386 [2024-10-09 01:29:08.251736] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:09.386 [2024-10-09 01:29:08.251744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:09.386 01:29:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.386 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:09.386 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:09.386 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:09.386 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.386 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.386 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:09.386 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:09.386 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.386 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.386 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.386 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.386 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.386 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.386 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.386 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.386 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.646 01:29:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.646 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.646 "name": "Existed_Raid", 00:09:09.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.646 "strip_size_kb": 0, 00:09:09.646 "state": "configuring", 00:09:09.646 "raid_level": "raid1", 00:09:09.646 "superblock": false, 00:09:09.646 "num_base_bdevs": 3, 00:09:09.646 "num_base_bdevs_discovered": 1, 00:09:09.646 "num_base_bdevs_operational": 3, 00:09:09.646 "base_bdevs_list": [ 00:09:09.646 { 00:09:09.646 "name": "BaseBdev1", 00:09:09.646 "uuid": "9b96827a-4209-4368-a317-a937598f8cb0", 00:09:09.646 "is_configured": true, 00:09:09.646 "data_offset": 0, 00:09:09.646 "data_size": 65536 00:09:09.646 }, 00:09:09.646 { 00:09:09.646 "name": "BaseBdev2", 00:09:09.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.646 "is_configured": false, 00:09:09.646 "data_offset": 0, 00:09:09.646 "data_size": 0 00:09:09.646 }, 00:09:09.646 { 00:09:09.646 "name": "BaseBdev3", 00:09:09.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.646 "is_configured": false, 00:09:09.646 "data_offset": 0, 00:09:09.646 "data_size": 0 00:09:09.646 } 00:09:09.646 ] 00:09:09.646 }' 00:09:09.646 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.646 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.905 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:09.905 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.905 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.905 [2024-10-09 01:29:08.733339] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:09.906 BaseBdev2 00:09:09.906 01:29:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.906 [ 00:09:09.906 { 00:09:09.906 "name": "BaseBdev2", 00:09:09.906 "aliases": [ 00:09:09.906 "7be1d211-10ce-4ef5-b471-5990815c53a3" 00:09:09.906 ], 00:09:09.906 "product_name": "Malloc disk", 00:09:09.906 "block_size": 512, 00:09:09.906 "num_blocks": 65536, 00:09:09.906 "uuid": "7be1d211-10ce-4ef5-b471-5990815c53a3", 00:09:09.906 "assigned_rate_limits": { 00:09:09.906 "rw_ios_per_sec": 0, 00:09:09.906 "rw_mbytes_per_sec": 0, 00:09:09.906 
"r_mbytes_per_sec": 0, 00:09:09.906 "w_mbytes_per_sec": 0 00:09:09.906 }, 00:09:09.906 "claimed": true, 00:09:09.906 "claim_type": "exclusive_write", 00:09:09.906 "zoned": false, 00:09:09.906 "supported_io_types": { 00:09:09.906 "read": true, 00:09:09.906 "write": true, 00:09:09.906 "unmap": true, 00:09:09.906 "flush": true, 00:09:09.906 "reset": true, 00:09:09.906 "nvme_admin": false, 00:09:09.906 "nvme_io": false, 00:09:09.906 "nvme_io_md": false, 00:09:09.906 "write_zeroes": true, 00:09:09.906 "zcopy": true, 00:09:09.906 "get_zone_info": false, 00:09:09.906 "zone_management": false, 00:09:09.906 "zone_append": false, 00:09:09.906 "compare": false, 00:09:09.906 "compare_and_write": false, 00:09:09.906 "abort": true, 00:09:09.906 "seek_hole": false, 00:09:09.906 "seek_data": false, 00:09:09.906 "copy": true, 00:09:09.906 "nvme_iov_md": false 00:09:09.906 }, 00:09:09.906 "memory_domains": [ 00:09:09.906 { 00:09:09.906 "dma_device_id": "system", 00:09:09.906 "dma_device_type": 1 00:09:09.906 }, 00:09:09.906 { 00:09:09.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.906 "dma_device_type": 2 00:09:09.906 } 00:09:09.906 ], 00:09:09.906 "driver_specific": {} 00:09:09.906 } 00:09:09.906 ] 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.906 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.166 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.166 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.166 "name": "Existed_Raid", 00:09:10.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.166 "strip_size_kb": 0, 00:09:10.166 "state": "configuring", 00:09:10.166 "raid_level": "raid1", 00:09:10.166 "superblock": false, 00:09:10.166 "num_base_bdevs": 3, 00:09:10.166 "num_base_bdevs_discovered": 2, 00:09:10.166 "num_base_bdevs_operational": 3, 00:09:10.166 "base_bdevs_list": [ 00:09:10.166 { 00:09:10.166 "name": "BaseBdev1", 00:09:10.166 "uuid": "9b96827a-4209-4368-a317-a937598f8cb0", 00:09:10.166 
"is_configured": true, 00:09:10.166 "data_offset": 0, 00:09:10.166 "data_size": 65536 00:09:10.166 }, 00:09:10.166 { 00:09:10.166 "name": "BaseBdev2", 00:09:10.166 "uuid": "7be1d211-10ce-4ef5-b471-5990815c53a3", 00:09:10.166 "is_configured": true, 00:09:10.166 "data_offset": 0, 00:09:10.166 "data_size": 65536 00:09:10.166 }, 00:09:10.166 { 00:09:10.166 "name": "BaseBdev3", 00:09:10.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.166 "is_configured": false, 00:09:10.166 "data_offset": 0, 00:09:10.166 "data_size": 0 00:09:10.166 } 00:09:10.166 ] 00:09:10.166 }' 00:09:10.166 01:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.166 01:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.425 [2024-10-09 01:29:09.234166] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.425 [2024-10-09 01:29:09.234291] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:10.425 [2024-10-09 01:29:09.234304] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:10.425 [2024-10-09 01:29:09.234638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:10.425 [2024-10-09 01:29:09.234801] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:10.425 [2024-10-09 01:29:09.234815] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:10.425 [2024-10-09 01:29:09.235017] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:09:10.425 BaseBdev3 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.425 [ 00:09:10.425 { 00:09:10.425 "name": "BaseBdev3", 00:09:10.425 "aliases": [ 00:09:10.425 "9bd82f38-278b-4e72-ac61-d2ffc7ea3b9c" 00:09:10.425 ], 00:09:10.425 "product_name": "Malloc disk", 00:09:10.425 "block_size": 512, 00:09:10.425 "num_blocks": 65536, 00:09:10.425 "uuid": "9bd82f38-278b-4e72-ac61-d2ffc7ea3b9c", 00:09:10.425 "assigned_rate_limits": { 
00:09:10.425 "rw_ios_per_sec": 0, 00:09:10.425 "rw_mbytes_per_sec": 0, 00:09:10.425 "r_mbytes_per_sec": 0, 00:09:10.425 "w_mbytes_per_sec": 0 00:09:10.425 }, 00:09:10.425 "claimed": true, 00:09:10.425 "claim_type": "exclusive_write", 00:09:10.425 "zoned": false, 00:09:10.425 "supported_io_types": { 00:09:10.425 "read": true, 00:09:10.425 "write": true, 00:09:10.425 "unmap": true, 00:09:10.425 "flush": true, 00:09:10.425 "reset": true, 00:09:10.425 "nvme_admin": false, 00:09:10.425 "nvme_io": false, 00:09:10.425 "nvme_io_md": false, 00:09:10.425 "write_zeroes": true, 00:09:10.425 "zcopy": true, 00:09:10.425 "get_zone_info": false, 00:09:10.425 "zone_management": false, 00:09:10.425 "zone_append": false, 00:09:10.425 "compare": false, 00:09:10.425 "compare_and_write": false, 00:09:10.425 "abort": true, 00:09:10.425 "seek_hole": false, 00:09:10.425 "seek_data": false, 00:09:10.425 "copy": true, 00:09:10.425 "nvme_iov_md": false 00:09:10.425 }, 00:09:10.425 "memory_domains": [ 00:09:10.425 { 00:09:10.425 "dma_device_id": "system", 00:09:10.425 "dma_device_type": 1 00:09:10.425 }, 00:09:10.425 { 00:09:10.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.425 "dma_device_type": 2 00:09:10.425 } 00:09:10.425 ], 00:09:10.425 "driver_specific": {} 00:09:10.425 } 00:09:10.425 ] 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.425 01:29:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.425 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.684 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.684 "name": "Existed_Raid", 00:09:10.684 "uuid": "9fabf14a-813b-4bb0-a250-94e709a0ef60", 00:09:10.684 "strip_size_kb": 0, 00:09:10.684 "state": "online", 00:09:10.684 "raid_level": "raid1", 00:09:10.684 "superblock": false, 00:09:10.684 "num_base_bdevs": 3, 00:09:10.684 "num_base_bdevs_discovered": 3, 00:09:10.684 "num_base_bdevs_operational": 3, 00:09:10.684 "base_bdevs_list": [ 00:09:10.684 { 00:09:10.684 "name": "BaseBdev1", 00:09:10.684 
"uuid": "9b96827a-4209-4368-a317-a937598f8cb0", 00:09:10.684 "is_configured": true, 00:09:10.684 "data_offset": 0, 00:09:10.684 "data_size": 65536 00:09:10.684 }, 00:09:10.684 { 00:09:10.684 "name": "BaseBdev2", 00:09:10.684 "uuid": "7be1d211-10ce-4ef5-b471-5990815c53a3", 00:09:10.684 "is_configured": true, 00:09:10.684 "data_offset": 0, 00:09:10.684 "data_size": 65536 00:09:10.684 }, 00:09:10.684 { 00:09:10.684 "name": "BaseBdev3", 00:09:10.684 "uuid": "9bd82f38-278b-4e72-ac61-d2ffc7ea3b9c", 00:09:10.684 "is_configured": true, 00:09:10.684 "data_offset": 0, 00:09:10.684 "data_size": 65536 00:09:10.684 } 00:09:10.684 ] 00:09:10.684 }' 00:09:10.684 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.684 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.943 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:10.943 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:10.943 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:10.943 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:10.943 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:10.943 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:10.943 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:10.943 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:10.943 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.943 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.943 [2024-10-09 
01:29:09.730604] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:10.943 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.943 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:10.943 "name": "Existed_Raid", 00:09:10.943 "aliases": [ 00:09:10.943 "9fabf14a-813b-4bb0-a250-94e709a0ef60" 00:09:10.943 ], 00:09:10.943 "product_name": "Raid Volume", 00:09:10.943 "block_size": 512, 00:09:10.943 "num_blocks": 65536, 00:09:10.943 "uuid": "9fabf14a-813b-4bb0-a250-94e709a0ef60", 00:09:10.943 "assigned_rate_limits": { 00:09:10.943 "rw_ios_per_sec": 0, 00:09:10.943 "rw_mbytes_per_sec": 0, 00:09:10.943 "r_mbytes_per_sec": 0, 00:09:10.943 "w_mbytes_per_sec": 0 00:09:10.943 }, 00:09:10.943 "claimed": false, 00:09:10.943 "zoned": false, 00:09:10.943 "supported_io_types": { 00:09:10.943 "read": true, 00:09:10.943 "write": true, 00:09:10.943 "unmap": false, 00:09:10.943 "flush": false, 00:09:10.943 "reset": true, 00:09:10.943 "nvme_admin": false, 00:09:10.943 "nvme_io": false, 00:09:10.943 "nvme_io_md": false, 00:09:10.943 "write_zeroes": true, 00:09:10.943 "zcopy": false, 00:09:10.943 "get_zone_info": false, 00:09:10.943 "zone_management": false, 00:09:10.943 "zone_append": false, 00:09:10.943 "compare": false, 00:09:10.943 "compare_and_write": false, 00:09:10.943 "abort": false, 00:09:10.943 "seek_hole": false, 00:09:10.943 "seek_data": false, 00:09:10.943 "copy": false, 00:09:10.943 "nvme_iov_md": false 00:09:10.943 }, 00:09:10.943 "memory_domains": [ 00:09:10.943 { 00:09:10.943 "dma_device_id": "system", 00:09:10.943 "dma_device_type": 1 00:09:10.943 }, 00:09:10.943 { 00:09:10.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.943 "dma_device_type": 2 00:09:10.943 }, 00:09:10.943 { 00:09:10.943 "dma_device_id": "system", 00:09:10.943 "dma_device_type": 1 00:09:10.943 }, 00:09:10.943 { 00:09:10.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:10.943 "dma_device_type": 2 00:09:10.943 }, 00:09:10.943 { 00:09:10.943 "dma_device_id": "system", 00:09:10.943 "dma_device_type": 1 00:09:10.943 }, 00:09:10.943 { 00:09:10.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.943 "dma_device_type": 2 00:09:10.943 } 00:09:10.943 ], 00:09:10.943 "driver_specific": { 00:09:10.943 "raid": { 00:09:10.943 "uuid": "9fabf14a-813b-4bb0-a250-94e709a0ef60", 00:09:10.943 "strip_size_kb": 0, 00:09:10.943 "state": "online", 00:09:10.943 "raid_level": "raid1", 00:09:10.943 "superblock": false, 00:09:10.943 "num_base_bdevs": 3, 00:09:10.943 "num_base_bdevs_discovered": 3, 00:09:10.943 "num_base_bdevs_operational": 3, 00:09:10.943 "base_bdevs_list": [ 00:09:10.943 { 00:09:10.943 "name": "BaseBdev1", 00:09:10.944 "uuid": "9b96827a-4209-4368-a317-a937598f8cb0", 00:09:10.944 "is_configured": true, 00:09:10.944 "data_offset": 0, 00:09:10.944 "data_size": 65536 00:09:10.944 }, 00:09:10.944 { 00:09:10.944 "name": "BaseBdev2", 00:09:10.944 "uuid": "7be1d211-10ce-4ef5-b471-5990815c53a3", 00:09:10.944 "is_configured": true, 00:09:10.944 "data_offset": 0, 00:09:10.944 "data_size": 65536 00:09:10.944 }, 00:09:10.944 { 00:09:10.944 "name": "BaseBdev3", 00:09:10.944 "uuid": "9bd82f38-278b-4e72-ac61-d2ffc7ea3b9c", 00:09:10.944 "is_configured": true, 00:09:10.944 "data_offset": 0, 00:09:10.944 "data_size": 65536 00:09:10.944 } 00:09:10.944 ] 00:09:10.944 } 00:09:10.944 } 00:09:10.944 }' 00:09:10.944 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:10.944 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:10.944 BaseBdev2 00:09:10.944 BaseBdev3' 00:09:10.944 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.204 01:29:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:11.204 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.204 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.204 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:11.204 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.204 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.204 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.204 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.204 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.204 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.204 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:11.204 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.204 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.204 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.204 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.204 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.204 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.204 01:29:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.204 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:11.204 01:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.204 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.204 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.204 01:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.204 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.204 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.204 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:11.204 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.204 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.204 [2024-10-09 01:29:10.026457] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.204 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.204 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:11.204 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:11.204 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:11.204 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:11.204 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:11.204 01:29:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:11.204 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.204 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.204 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.204 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:11.204 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:11.204 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.204 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.204 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.204 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.204 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.204 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.204 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.204 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.204 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.464 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.464 "name": "Existed_Raid", 00:09:11.464 "uuid": "9fabf14a-813b-4bb0-a250-94e709a0ef60", 00:09:11.464 "strip_size_kb": 0, 00:09:11.464 "state": "online", 00:09:11.464 "raid_level": "raid1", 
00:09:11.464 "superblock": false, 00:09:11.464 "num_base_bdevs": 3, 00:09:11.464 "num_base_bdevs_discovered": 2, 00:09:11.464 "num_base_bdevs_operational": 2, 00:09:11.464 "base_bdevs_list": [ 00:09:11.464 { 00:09:11.464 "name": null, 00:09:11.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.464 "is_configured": false, 00:09:11.464 "data_offset": 0, 00:09:11.464 "data_size": 65536 00:09:11.464 }, 00:09:11.464 { 00:09:11.464 "name": "BaseBdev2", 00:09:11.464 "uuid": "7be1d211-10ce-4ef5-b471-5990815c53a3", 00:09:11.464 "is_configured": true, 00:09:11.464 "data_offset": 0, 00:09:11.464 "data_size": 65536 00:09:11.464 }, 00:09:11.464 { 00:09:11.464 "name": "BaseBdev3", 00:09:11.464 "uuid": "9bd82f38-278b-4e72-ac61-d2ffc7ea3b9c", 00:09:11.464 "is_configured": true, 00:09:11.464 "data_offset": 0, 00:09:11.464 "data_size": 65536 00:09:11.464 } 00:09:11.464 ] 00:09:11.464 }' 00:09:11.464 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.464 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:11.724 01:29:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.724 [2024-10-09 01:29:10.498950] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.724 
[2024-10-09 01:29:10.579263] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:11.724 [2024-10-09 01:29:10.579421] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.724 [2024-10-09 01:29:10.598961] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.724 [2024-10-09 01:29:10.599070] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:11.724 [2024-10-09 01:29:10.599108] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.724 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.984 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.985 BaseBdev2 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:11.985 [ 00:09:11.985 { 00:09:11.985 "name": "BaseBdev2", 00:09:11.985 "aliases": [ 00:09:11.985 "f6c1d13b-9d9d-4868-a7f4-7869cafd2ee8" 00:09:11.985 ], 00:09:11.985 "product_name": "Malloc disk", 00:09:11.985 "block_size": 512, 00:09:11.985 "num_blocks": 65536, 00:09:11.985 "uuid": "f6c1d13b-9d9d-4868-a7f4-7869cafd2ee8", 00:09:11.985 "assigned_rate_limits": { 00:09:11.985 "rw_ios_per_sec": 0, 00:09:11.985 "rw_mbytes_per_sec": 0, 00:09:11.985 "r_mbytes_per_sec": 0, 00:09:11.985 "w_mbytes_per_sec": 0 00:09:11.985 }, 00:09:11.985 "claimed": false, 00:09:11.985 "zoned": false, 00:09:11.985 "supported_io_types": { 00:09:11.985 "read": true, 00:09:11.985 "write": true, 00:09:11.985 "unmap": true, 00:09:11.985 "flush": true, 00:09:11.985 "reset": true, 00:09:11.985 "nvme_admin": false, 00:09:11.985 "nvme_io": false, 00:09:11.985 "nvme_io_md": false, 00:09:11.985 "write_zeroes": true, 00:09:11.985 "zcopy": true, 00:09:11.985 "get_zone_info": false, 00:09:11.985 "zone_management": false, 00:09:11.985 "zone_append": false, 00:09:11.985 "compare": false, 00:09:11.985 "compare_and_write": false, 00:09:11.985 "abort": true, 00:09:11.985 "seek_hole": false, 00:09:11.985 "seek_data": false, 00:09:11.985 "copy": true, 00:09:11.985 "nvme_iov_md": false 00:09:11.985 }, 00:09:11.985 "memory_domains": [ 00:09:11.985 { 00:09:11.985 "dma_device_id": "system", 00:09:11.985 "dma_device_type": 1 00:09:11.985 }, 00:09:11.985 { 00:09:11.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.985 "dma_device_type": 2 00:09:11.985 } 00:09:11.985 ], 00:09:11.985 "driver_specific": {} 00:09:11.985 } 00:09:11.985 ] 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.985 BaseBdev3 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:11.985 [ 00:09:11.985 { 00:09:11.985 "name": "BaseBdev3", 00:09:11.985 "aliases": [ 00:09:11.985 "85ca11d2-e66d-447d-9b13-ab962b47ee09" 00:09:11.985 ], 00:09:11.985 "product_name": "Malloc disk", 00:09:11.985 "block_size": 512, 00:09:11.985 "num_blocks": 65536, 00:09:11.985 "uuid": "85ca11d2-e66d-447d-9b13-ab962b47ee09", 00:09:11.985 "assigned_rate_limits": { 00:09:11.985 "rw_ios_per_sec": 0, 00:09:11.985 "rw_mbytes_per_sec": 0, 00:09:11.985 "r_mbytes_per_sec": 0, 00:09:11.985 "w_mbytes_per_sec": 0 00:09:11.985 }, 00:09:11.985 "claimed": false, 00:09:11.985 "zoned": false, 00:09:11.985 "supported_io_types": { 00:09:11.985 "read": true, 00:09:11.985 "write": true, 00:09:11.985 "unmap": true, 00:09:11.985 "flush": true, 00:09:11.985 "reset": true, 00:09:11.985 "nvme_admin": false, 00:09:11.985 "nvme_io": false, 00:09:11.985 "nvme_io_md": false, 00:09:11.985 "write_zeroes": true, 00:09:11.985 "zcopy": true, 00:09:11.985 "get_zone_info": false, 00:09:11.985 "zone_management": false, 00:09:11.985 "zone_append": false, 00:09:11.985 "compare": false, 00:09:11.985 "compare_and_write": false, 00:09:11.985 "abort": true, 00:09:11.985 "seek_hole": false, 00:09:11.985 "seek_data": false, 00:09:11.985 "copy": true, 00:09:11.985 "nvme_iov_md": false 00:09:11.985 }, 00:09:11.985 "memory_domains": [ 00:09:11.985 { 00:09:11.985 "dma_device_id": "system", 00:09:11.985 "dma_device_type": 1 00:09:11.985 }, 00:09:11.985 { 00:09:11.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.985 "dma_device_type": 2 00:09:11.985 } 00:09:11.985 ], 00:09:11.985 "driver_specific": {} 00:09:11.985 } 00:09:11.985 ] 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.985 [2024-10-09 01:29:10.777785] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:11.985 [2024-10-09 01:29:10.777901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:11.985 [2024-10-09 01:29:10.777929] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:11.985 [2024-10-09 01:29:10.779943] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.985 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.985 "name": "Existed_Raid", 00:09:11.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.985 "strip_size_kb": 0, 00:09:11.985 "state": "configuring", 00:09:11.985 "raid_level": "raid1", 00:09:11.985 "superblock": false, 00:09:11.985 "num_base_bdevs": 3, 00:09:11.985 "num_base_bdevs_discovered": 2, 00:09:11.986 "num_base_bdevs_operational": 3, 00:09:11.986 "base_bdevs_list": [ 00:09:11.986 { 00:09:11.986 "name": "BaseBdev1", 00:09:11.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.986 "is_configured": false, 00:09:11.986 "data_offset": 0, 00:09:11.986 "data_size": 0 00:09:11.986 }, 00:09:11.986 { 00:09:11.986 "name": "BaseBdev2", 00:09:11.986 "uuid": "f6c1d13b-9d9d-4868-a7f4-7869cafd2ee8", 00:09:11.986 "is_configured": true, 00:09:11.986 "data_offset": 0, 00:09:11.986 "data_size": 65536 00:09:11.986 }, 00:09:11.986 { 00:09:11.986 "name": "BaseBdev3", 00:09:11.986 "uuid": "85ca11d2-e66d-447d-9b13-ab962b47ee09", 00:09:11.986 "is_configured": true, 00:09:11.986 "data_offset": 0, 00:09:11.986 "data_size": 65536 00:09:11.986 } 00:09:11.986 ] 
00:09:11.986 }' 00:09:11.986 01:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.986 01:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.555 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:12.555 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.555 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.555 [2024-10-09 01:29:11.205881] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:12.555 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.555 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:12.555 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.555 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.555 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:12.555 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:12.555 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.555 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.555 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.555 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.555 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.555 01:29:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.555 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.555 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.555 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.555 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.555 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.555 "name": "Existed_Raid", 00:09:12.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.555 "strip_size_kb": 0, 00:09:12.555 "state": "configuring", 00:09:12.556 "raid_level": "raid1", 00:09:12.556 "superblock": false, 00:09:12.556 "num_base_bdevs": 3, 00:09:12.556 "num_base_bdevs_discovered": 1, 00:09:12.556 "num_base_bdevs_operational": 3, 00:09:12.556 "base_bdevs_list": [ 00:09:12.556 { 00:09:12.556 "name": "BaseBdev1", 00:09:12.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.556 "is_configured": false, 00:09:12.556 "data_offset": 0, 00:09:12.556 "data_size": 0 00:09:12.556 }, 00:09:12.556 { 00:09:12.556 "name": null, 00:09:12.556 "uuid": "f6c1d13b-9d9d-4868-a7f4-7869cafd2ee8", 00:09:12.556 "is_configured": false, 00:09:12.556 "data_offset": 0, 00:09:12.556 "data_size": 65536 00:09:12.556 }, 00:09:12.556 { 00:09:12.556 "name": "BaseBdev3", 00:09:12.556 "uuid": "85ca11d2-e66d-447d-9b13-ab962b47ee09", 00:09:12.556 "is_configured": true, 00:09:12.556 "data_offset": 0, 00:09:12.556 "data_size": 65536 00:09:12.556 } 00:09:12.556 ] 00:09:12.556 }' 00:09:12.556 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.556 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.816 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:12.816 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.816 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.816 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:12.816 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.076 BaseBdev1 00:09:13.076 [2024-10-09 01:29:11.758904] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 
00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.076 [ 00:09:13.076 { 00:09:13.076 "name": "BaseBdev1", 00:09:13.076 "aliases": [ 00:09:13.076 "923ff482-a1d3-4cf1-9989-6886d5801eb1" 00:09:13.076 ], 00:09:13.076 "product_name": "Malloc disk", 00:09:13.076 "block_size": 512, 00:09:13.076 "num_blocks": 65536, 00:09:13.076 "uuid": "923ff482-a1d3-4cf1-9989-6886d5801eb1", 00:09:13.076 "assigned_rate_limits": { 00:09:13.076 "rw_ios_per_sec": 0, 00:09:13.076 "rw_mbytes_per_sec": 0, 00:09:13.076 "r_mbytes_per_sec": 0, 00:09:13.076 "w_mbytes_per_sec": 0 00:09:13.076 }, 00:09:13.076 "claimed": true, 00:09:13.076 "claim_type": "exclusive_write", 00:09:13.076 "zoned": false, 00:09:13.076 "supported_io_types": { 00:09:13.076 "read": true, 00:09:13.076 "write": true, 00:09:13.076 "unmap": true, 00:09:13.076 "flush": true, 00:09:13.076 "reset": true, 00:09:13.076 "nvme_admin": false, 00:09:13.076 "nvme_io": false, 00:09:13.076 "nvme_io_md": false, 00:09:13.076 "write_zeroes": true, 00:09:13.076 "zcopy": true, 00:09:13.076 "get_zone_info": false, 00:09:13.076 "zone_management": false, 00:09:13.076 "zone_append": false, 00:09:13.076 "compare": false, 00:09:13.076 "compare_and_write": false, 00:09:13.076 "abort": true, 00:09:13.076 "seek_hole": false, 00:09:13.076 "seek_data": false, 00:09:13.076 "copy": true, 00:09:13.076 "nvme_iov_md": false 00:09:13.076 }, 
00:09:13.076 "memory_domains": [ 00:09:13.076 { 00:09:13.076 "dma_device_id": "system", 00:09:13.076 "dma_device_type": 1 00:09:13.076 }, 00:09:13.076 { 00:09:13.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.076 "dma_device_type": 2 00:09:13.076 } 00:09:13.076 ], 00:09:13.076 "driver_specific": {} 00:09:13.076 } 00:09:13.076 ] 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.076 01:29:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.076 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.077 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.077 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.077 "name": "Existed_Raid", 00:09:13.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.077 "strip_size_kb": 0, 00:09:13.077 "state": "configuring", 00:09:13.077 "raid_level": "raid1", 00:09:13.077 "superblock": false, 00:09:13.077 "num_base_bdevs": 3, 00:09:13.077 "num_base_bdevs_discovered": 2, 00:09:13.077 "num_base_bdevs_operational": 3, 00:09:13.077 "base_bdevs_list": [ 00:09:13.077 { 00:09:13.077 "name": "BaseBdev1", 00:09:13.077 "uuid": "923ff482-a1d3-4cf1-9989-6886d5801eb1", 00:09:13.077 "is_configured": true, 00:09:13.077 "data_offset": 0, 00:09:13.077 "data_size": 65536 00:09:13.077 }, 00:09:13.077 { 00:09:13.077 "name": null, 00:09:13.077 "uuid": "f6c1d13b-9d9d-4868-a7f4-7869cafd2ee8", 00:09:13.077 "is_configured": false, 00:09:13.077 "data_offset": 0, 00:09:13.077 "data_size": 65536 00:09:13.077 }, 00:09:13.077 { 00:09:13.077 "name": "BaseBdev3", 00:09:13.077 "uuid": "85ca11d2-e66d-447d-9b13-ab962b47ee09", 00:09:13.077 "is_configured": true, 00:09:13.077 "data_offset": 0, 00:09:13.077 "data_size": 65536 00:09:13.077 } 00:09:13.077 ] 00:09:13.077 }' 00:09:13.077 01:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.077 01:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.646 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:13.646 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.646 01:29:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.646 01:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.646 01:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.646 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:13.646 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:13.646 01:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.646 01:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.646 [2024-10-09 01:29:12.307094] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:13.646 01:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.646 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:13.646 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.647 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.647 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.647 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.647 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.647 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.647 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.647 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:13.647 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.647 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.647 01:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.647 01:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.647 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.647 01:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.647 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.647 "name": "Existed_Raid", 00:09:13.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.647 "strip_size_kb": 0, 00:09:13.647 "state": "configuring", 00:09:13.647 "raid_level": "raid1", 00:09:13.647 "superblock": false, 00:09:13.647 "num_base_bdevs": 3, 00:09:13.647 "num_base_bdevs_discovered": 1, 00:09:13.647 "num_base_bdevs_operational": 3, 00:09:13.647 "base_bdevs_list": [ 00:09:13.647 { 00:09:13.647 "name": "BaseBdev1", 00:09:13.647 "uuid": "923ff482-a1d3-4cf1-9989-6886d5801eb1", 00:09:13.647 "is_configured": true, 00:09:13.647 "data_offset": 0, 00:09:13.647 "data_size": 65536 00:09:13.647 }, 00:09:13.647 { 00:09:13.647 "name": null, 00:09:13.647 "uuid": "f6c1d13b-9d9d-4868-a7f4-7869cafd2ee8", 00:09:13.647 "is_configured": false, 00:09:13.647 "data_offset": 0, 00:09:13.647 "data_size": 65536 00:09:13.647 }, 00:09:13.647 { 00:09:13.647 "name": null, 00:09:13.647 "uuid": "85ca11d2-e66d-447d-9b13-ab962b47ee09", 00:09:13.647 "is_configured": false, 00:09:13.647 "data_offset": 0, 00:09:13.647 "data_size": 65536 00:09:13.647 } 00:09:13.647 ] 00:09:13.647 }' 00:09:13.647 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:13.647 01:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.907 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.907 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:13.907 01:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.907 01:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.907 01:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.907 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:13.907 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:13.907 01:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.907 01:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.167 [2024-10-09 01:29:12.799238] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.167 01:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.167 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:14.167 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.167 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.167 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.167 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.167 01:29:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.167 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.167 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.167 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.167 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.167 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.167 01:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.167 01:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.167 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.167 01:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.167 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.167 "name": "Existed_Raid", 00:09:14.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.167 "strip_size_kb": 0, 00:09:14.167 "state": "configuring", 00:09:14.167 "raid_level": "raid1", 00:09:14.167 "superblock": false, 00:09:14.167 "num_base_bdevs": 3, 00:09:14.167 "num_base_bdevs_discovered": 2, 00:09:14.167 "num_base_bdevs_operational": 3, 00:09:14.167 "base_bdevs_list": [ 00:09:14.167 { 00:09:14.167 "name": "BaseBdev1", 00:09:14.167 "uuid": "923ff482-a1d3-4cf1-9989-6886d5801eb1", 00:09:14.167 "is_configured": true, 00:09:14.167 "data_offset": 0, 00:09:14.167 "data_size": 65536 00:09:14.167 }, 00:09:14.167 { 00:09:14.167 "name": null, 00:09:14.167 "uuid": "f6c1d13b-9d9d-4868-a7f4-7869cafd2ee8", 00:09:14.167 "is_configured": false, 00:09:14.167 "data_offset": 
0, 00:09:14.167 "data_size": 65536 00:09:14.167 }, 00:09:14.167 { 00:09:14.167 "name": "BaseBdev3", 00:09:14.167 "uuid": "85ca11d2-e66d-447d-9b13-ab962b47ee09", 00:09:14.167 "is_configured": true, 00:09:14.167 "data_offset": 0, 00:09:14.167 "data_size": 65536 00:09:14.167 } 00:09:14.167 ] 00:09:14.167 }' 00:09:14.167 01:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.167 01:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.427 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.427 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:14.427 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.427 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.427 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.427 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:14.427 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:14.427 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.427 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.427 [2024-10-09 01:29:13.299420] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:14.687 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.687 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:14.687 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:14.687 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.687 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.687 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.687 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.687 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.687 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.687 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.687 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.687 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.687 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.687 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.687 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.687 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.687 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.687 "name": "Existed_Raid", 00:09:14.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.687 "strip_size_kb": 0, 00:09:14.687 "state": "configuring", 00:09:14.687 "raid_level": "raid1", 00:09:14.687 "superblock": false, 00:09:14.687 "num_base_bdevs": 3, 00:09:14.687 "num_base_bdevs_discovered": 1, 00:09:14.687 "num_base_bdevs_operational": 3, 00:09:14.687 "base_bdevs_list": [ 
00:09:14.687 { 00:09:14.687 "name": null, 00:09:14.687 "uuid": "923ff482-a1d3-4cf1-9989-6886d5801eb1", 00:09:14.687 "is_configured": false, 00:09:14.687 "data_offset": 0, 00:09:14.687 "data_size": 65536 00:09:14.687 }, 00:09:14.687 { 00:09:14.687 "name": null, 00:09:14.687 "uuid": "f6c1d13b-9d9d-4868-a7f4-7869cafd2ee8", 00:09:14.687 "is_configured": false, 00:09:14.687 "data_offset": 0, 00:09:14.687 "data_size": 65536 00:09:14.687 }, 00:09:14.687 { 00:09:14.687 "name": "BaseBdev3", 00:09:14.687 "uuid": "85ca11d2-e66d-447d-9b13-ab962b47ee09", 00:09:14.687 "is_configured": true, 00:09:14.687 "data_offset": 0, 00:09:14.687 "data_size": 65536 00:09:14.687 } 00:09:14.687 ] 00:09:14.687 }' 00:09:14.687 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.687 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.947 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.947 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.947 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:14.947 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.947 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.947 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:14.947 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:14.947 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.947 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.947 [2024-10-09 01:29:13.739398] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:14.947 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.947 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:14.947 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.947 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.947 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.947 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.947 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.947 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.947 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.947 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.948 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.948 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.948 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.948 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.948 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.948 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.948 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:14.948 "name": "Existed_Raid", 00:09:14.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.948 "strip_size_kb": 0, 00:09:14.948 "state": "configuring", 00:09:14.948 "raid_level": "raid1", 00:09:14.948 "superblock": false, 00:09:14.948 "num_base_bdevs": 3, 00:09:14.948 "num_base_bdevs_discovered": 2, 00:09:14.948 "num_base_bdevs_operational": 3, 00:09:14.948 "base_bdevs_list": [ 00:09:14.948 { 00:09:14.948 "name": null, 00:09:14.948 "uuid": "923ff482-a1d3-4cf1-9989-6886d5801eb1", 00:09:14.948 "is_configured": false, 00:09:14.948 "data_offset": 0, 00:09:14.948 "data_size": 65536 00:09:14.948 }, 00:09:14.948 { 00:09:14.948 "name": "BaseBdev2", 00:09:14.948 "uuid": "f6c1d13b-9d9d-4868-a7f4-7869cafd2ee8", 00:09:14.948 "is_configured": true, 00:09:14.948 "data_offset": 0, 00:09:14.948 "data_size": 65536 00:09:14.948 }, 00:09:14.948 { 00:09:14.948 "name": "BaseBdev3", 00:09:14.948 "uuid": "85ca11d2-e66d-447d-9b13-ab962b47ee09", 00:09:14.948 "is_configured": true, 00:09:14.948 "data_offset": 0, 00:09:14.948 "data_size": 65536 00:09:14.948 } 00:09:14.948 ] 00:09:14.948 }' 00:09:14.948 01:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.948 01:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.518 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.518 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:15.518 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.518 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.518 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.518 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 
00:09:15.518 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.518 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:15.518 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.518 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.518 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.518 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 923ff482-a1d3-4cf1-9989-6886d5801eb1 00:09:15.518 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.518 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.518 [2024-10-09 01:29:14.243898] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:15.518 [2024-10-09 01:29:14.243945] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:15.518 [2024-10-09 01:29:14.243955] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:15.518 [2024-10-09 01:29:14.244221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:09:15.518 [2024-10-09 01:29:14.244384] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:15.518 [2024-10-09 01:29:14.244394] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:15.518 [2024-10-09 01:29:14.244617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.518 NewBaseBdev 00:09:15.518 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.518 
01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:15.518 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:15.518 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:15.518 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:15.518 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:15.518 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:15.518 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:15.518 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.518 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.518 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.518 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:15.518 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.518 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.518 [ 00:09:15.518 { 00:09:15.518 "name": "NewBaseBdev", 00:09:15.518 "aliases": [ 00:09:15.518 "923ff482-a1d3-4cf1-9989-6886d5801eb1" 00:09:15.518 ], 00:09:15.518 "product_name": "Malloc disk", 00:09:15.518 "block_size": 512, 00:09:15.518 "num_blocks": 65536, 00:09:15.518 "uuid": "923ff482-a1d3-4cf1-9989-6886d5801eb1", 00:09:15.518 "assigned_rate_limits": { 00:09:15.518 "rw_ios_per_sec": 0, 00:09:15.518 "rw_mbytes_per_sec": 0, 00:09:15.518 "r_mbytes_per_sec": 0, 00:09:15.518 "w_mbytes_per_sec": 0 00:09:15.518 }, 00:09:15.518 
"claimed": true, 00:09:15.518 "claim_type": "exclusive_write", 00:09:15.518 "zoned": false, 00:09:15.519 "supported_io_types": { 00:09:15.519 "read": true, 00:09:15.519 "write": true, 00:09:15.519 "unmap": true, 00:09:15.519 "flush": true, 00:09:15.519 "reset": true, 00:09:15.519 "nvme_admin": false, 00:09:15.519 "nvme_io": false, 00:09:15.519 "nvme_io_md": false, 00:09:15.519 "write_zeroes": true, 00:09:15.519 "zcopy": true, 00:09:15.519 "get_zone_info": false, 00:09:15.519 "zone_management": false, 00:09:15.519 "zone_append": false, 00:09:15.519 "compare": false, 00:09:15.519 "compare_and_write": false, 00:09:15.519 "abort": true, 00:09:15.519 "seek_hole": false, 00:09:15.519 "seek_data": false, 00:09:15.519 "copy": true, 00:09:15.519 "nvme_iov_md": false 00:09:15.519 }, 00:09:15.519 "memory_domains": [ 00:09:15.519 { 00:09:15.519 "dma_device_id": "system", 00:09:15.519 "dma_device_type": 1 00:09:15.519 }, 00:09:15.519 { 00:09:15.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.519 "dma_device_type": 2 00:09:15.519 } 00:09:15.519 ], 00:09:15.519 "driver_specific": {} 00:09:15.519 } 00:09:15.519 ] 00:09:15.519 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.519 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:15.519 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:15.519 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.519 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.519 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.519 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.519 01:29:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.519 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.519 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.519 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.519 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.519 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.519 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.519 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.519 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.519 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.519 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.519 "name": "Existed_Raid", 00:09:15.519 "uuid": "c87181bc-8bb1-48a5-b8c7-22fae7f662f4", 00:09:15.519 "strip_size_kb": 0, 00:09:15.519 "state": "online", 00:09:15.519 "raid_level": "raid1", 00:09:15.519 "superblock": false, 00:09:15.519 "num_base_bdevs": 3, 00:09:15.519 "num_base_bdevs_discovered": 3, 00:09:15.519 "num_base_bdevs_operational": 3, 00:09:15.519 "base_bdevs_list": [ 00:09:15.519 { 00:09:15.519 "name": "NewBaseBdev", 00:09:15.519 "uuid": "923ff482-a1d3-4cf1-9989-6886d5801eb1", 00:09:15.519 "is_configured": true, 00:09:15.519 "data_offset": 0, 00:09:15.519 "data_size": 65536 00:09:15.519 }, 00:09:15.519 { 00:09:15.519 "name": "BaseBdev2", 00:09:15.519 "uuid": "f6c1d13b-9d9d-4868-a7f4-7869cafd2ee8", 00:09:15.519 "is_configured": true, 00:09:15.519 "data_offset": 0, 00:09:15.519 "data_size": 65536 
00:09:15.519 }, 00:09:15.519 { 00:09:15.519 "name": "BaseBdev3", 00:09:15.519 "uuid": "85ca11d2-e66d-447d-9b13-ab962b47ee09", 00:09:15.519 "is_configured": true, 00:09:15.519 "data_offset": 0, 00:09:15.519 "data_size": 65536 00:09:15.519 } 00:09:15.519 ] 00:09:15.519 }' 00:09:15.519 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.519 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.090 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:16.090 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:16.090 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.090 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.090 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.090 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.090 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:16.090 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.090 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.090 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.090 [2024-10-09 01:29:14.712422] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.090 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.090 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.090 "name": "Existed_Raid", 00:09:16.090 "aliases": [ 
00:09:16.090 "c87181bc-8bb1-48a5-b8c7-22fae7f662f4" 00:09:16.090 ], 00:09:16.090 "product_name": "Raid Volume", 00:09:16.090 "block_size": 512, 00:09:16.090 "num_blocks": 65536, 00:09:16.090 "uuid": "c87181bc-8bb1-48a5-b8c7-22fae7f662f4", 00:09:16.090 "assigned_rate_limits": { 00:09:16.090 "rw_ios_per_sec": 0, 00:09:16.090 "rw_mbytes_per_sec": 0, 00:09:16.090 "r_mbytes_per_sec": 0, 00:09:16.090 "w_mbytes_per_sec": 0 00:09:16.090 }, 00:09:16.090 "claimed": false, 00:09:16.090 "zoned": false, 00:09:16.090 "supported_io_types": { 00:09:16.090 "read": true, 00:09:16.090 "write": true, 00:09:16.090 "unmap": false, 00:09:16.090 "flush": false, 00:09:16.090 "reset": true, 00:09:16.090 "nvme_admin": false, 00:09:16.090 "nvme_io": false, 00:09:16.090 "nvme_io_md": false, 00:09:16.090 "write_zeroes": true, 00:09:16.090 "zcopy": false, 00:09:16.090 "get_zone_info": false, 00:09:16.090 "zone_management": false, 00:09:16.090 "zone_append": false, 00:09:16.090 "compare": false, 00:09:16.090 "compare_and_write": false, 00:09:16.090 "abort": false, 00:09:16.090 "seek_hole": false, 00:09:16.090 "seek_data": false, 00:09:16.090 "copy": false, 00:09:16.090 "nvme_iov_md": false 00:09:16.090 }, 00:09:16.090 "memory_domains": [ 00:09:16.090 { 00:09:16.090 "dma_device_id": "system", 00:09:16.090 "dma_device_type": 1 00:09:16.090 }, 00:09:16.090 { 00:09:16.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.090 "dma_device_type": 2 00:09:16.090 }, 00:09:16.090 { 00:09:16.090 "dma_device_id": "system", 00:09:16.090 "dma_device_type": 1 00:09:16.090 }, 00:09:16.090 { 00:09:16.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.090 "dma_device_type": 2 00:09:16.090 }, 00:09:16.090 { 00:09:16.090 "dma_device_id": "system", 00:09:16.090 "dma_device_type": 1 00:09:16.090 }, 00:09:16.090 { 00:09:16.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.090 "dma_device_type": 2 00:09:16.090 } 00:09:16.090 ], 00:09:16.090 "driver_specific": { 00:09:16.090 "raid": { 00:09:16.090 "uuid": 
"c87181bc-8bb1-48a5-b8c7-22fae7f662f4", 00:09:16.090 "strip_size_kb": 0, 00:09:16.090 "state": "online", 00:09:16.090 "raid_level": "raid1", 00:09:16.091 "superblock": false, 00:09:16.091 "num_base_bdevs": 3, 00:09:16.091 "num_base_bdevs_discovered": 3, 00:09:16.091 "num_base_bdevs_operational": 3, 00:09:16.091 "base_bdevs_list": [ 00:09:16.091 { 00:09:16.091 "name": "NewBaseBdev", 00:09:16.091 "uuid": "923ff482-a1d3-4cf1-9989-6886d5801eb1", 00:09:16.091 "is_configured": true, 00:09:16.091 "data_offset": 0, 00:09:16.091 "data_size": 65536 00:09:16.091 }, 00:09:16.091 { 00:09:16.091 "name": "BaseBdev2", 00:09:16.091 "uuid": "f6c1d13b-9d9d-4868-a7f4-7869cafd2ee8", 00:09:16.091 "is_configured": true, 00:09:16.091 "data_offset": 0, 00:09:16.091 "data_size": 65536 00:09:16.091 }, 00:09:16.091 { 00:09:16.091 "name": "BaseBdev3", 00:09:16.091 "uuid": "85ca11d2-e66d-447d-9b13-ab962b47ee09", 00:09:16.091 "is_configured": true, 00:09:16.091 "data_offset": 0, 00:09:16.091 "data_size": 65536 00:09:16.091 } 00:09:16.091 ] 00:09:16.091 } 00:09:16.091 } 00:09:16.091 }' 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:16.091 BaseBdev2 00:09:16.091 BaseBdev3' 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:16.091 01:29:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.091 [2024-10-09 01:29:14.972105] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:16.091 [2024-10-09 01:29:14.972135] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.091 [2024-10-09 01:29:14.972211] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.091 [2024-10-09 01:29:14.972503] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:16.091 [2024-10-09 01:29:14.972521] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79562 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 79562 ']' 00:09:16.091 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 79562 00:09:16.091 01:29:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:16.352 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:16.352 01:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79562 00:09:16.352 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:16.352 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:16.352 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79562' 00:09:16.352 killing process with pid 79562 00:09:16.352 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 79562 00:09:16.352 [2024-10-09 01:29:15.023625] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:16.352 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 79562 00:09:16.352 [2024-10-09 01:29:15.081668] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:16.612 01:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:16.612 00:09:16.612 real 0m9.083s 00:09:16.612 user 0m15.200s 00:09:16.612 sys 0m1.952s 00:09:16.612 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:16.612 01:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.612 ************************************ 00:09:16.612 END TEST raid_state_function_test 00:09:16.612 ************************************ 00:09:16.873 01:29:15 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:16.874 01:29:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:16.874 01:29:15 bdev_raid -- common/autotest_common.sh@1107 
-- # xtrace_disable 00:09:16.874 01:29:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:16.874 ************************************ 00:09:16.874 START TEST raid_state_function_test_sb 00:09:16.874 ************************************ 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80167 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80167' 00:09:16.874 Process raid pid: 80167 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80167 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 80167 ']' 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.874 01:29:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:16.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:16.874 01:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.874 [2024-10-09 01:29:15.622527] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:09:16.874 [2024-10-09 01:29:15.622659] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.874 [2024-10-09 01:29:15.759779] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:17.209 [2024-10-09 01:29:15.789001] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.209 [2024-10-09 01:29:15.861822] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.209 [2024-10-09 01:29:15.937655] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.209 [2024-10-09 01:29:15.937701] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.777 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:17.777 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:17.777 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:17.777 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.777 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.777 [2024-10-09 01:29:16.453320] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:17.777 [2024-10-09 01:29:16.453377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:17.777 [2024-10-09 01:29:16.453390] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:17.777 [2024-10-09 01:29:16.453397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:17.777 [2024-10-09 01:29:16.453409] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:17.777 [2024-10-09 01:29:16.453416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:17.777 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.777 01:29:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:17.777 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.777 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.777 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:17.777 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:17.777 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.777 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.778 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.778 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.778 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.778 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.778 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.778 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.778 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.778 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.778 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.778 "name": "Existed_Raid", 00:09:17.778 "uuid": "93863810-67eb-442d-8aea-7ac429329556", 00:09:17.778 "strip_size_kb": 0, 
00:09:17.778 "state": "configuring", 00:09:17.778 "raid_level": "raid1", 00:09:17.778 "superblock": true, 00:09:17.778 "num_base_bdevs": 3, 00:09:17.778 "num_base_bdevs_discovered": 0, 00:09:17.778 "num_base_bdevs_operational": 3, 00:09:17.778 "base_bdevs_list": [ 00:09:17.778 { 00:09:17.778 "name": "BaseBdev1", 00:09:17.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.778 "is_configured": false, 00:09:17.778 "data_offset": 0, 00:09:17.778 "data_size": 0 00:09:17.778 }, 00:09:17.778 { 00:09:17.778 "name": "BaseBdev2", 00:09:17.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.778 "is_configured": false, 00:09:17.778 "data_offset": 0, 00:09:17.778 "data_size": 0 00:09:17.778 }, 00:09:17.778 { 00:09:17.778 "name": "BaseBdev3", 00:09:17.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.778 "is_configured": false, 00:09:17.778 "data_offset": 0, 00:09:17.778 "data_size": 0 00:09:17.778 } 00:09:17.778 ] 00:09:17.778 }' 00:09:17.778 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.778 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.038 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:18.038 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.038 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.038 [2024-10-09 01:29:16.869298] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:18.038 [2024-10-09 01:29:16.869402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:18.038 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.038 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd 
bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:18.038 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.038 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.038 [2024-10-09 01:29:16.881331] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:18.038 [2024-10-09 01:29:16.881411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:18.038 [2024-10-09 01:29:16.881442] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:18.038 [2024-10-09 01:29:16.881464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:18.038 [2024-10-09 01:29:16.881485] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:18.038 [2024-10-09 01:29:16.881505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:18.038 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.038 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:18.038 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.038 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.038 [2024-10-09 01:29:16.908193] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:18.038 BaseBdev1 00:09:18.038 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.038 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:18.038 01:29:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:18.038 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:18.038 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:18.038 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:18.038 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:18.038 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:18.038 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.038 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.038 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.038 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:18.038 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.038 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.297 [ 00:09:18.297 { 00:09:18.297 "name": "BaseBdev1", 00:09:18.297 "aliases": [ 00:09:18.297 "244681dd-0689-43f1-ac7a-7346b07d70a1" 00:09:18.297 ], 00:09:18.297 "product_name": "Malloc disk", 00:09:18.297 "block_size": 512, 00:09:18.297 "num_blocks": 65536, 00:09:18.297 "uuid": "244681dd-0689-43f1-ac7a-7346b07d70a1", 00:09:18.297 "assigned_rate_limits": { 00:09:18.297 "rw_ios_per_sec": 0, 00:09:18.297 "rw_mbytes_per_sec": 0, 00:09:18.297 "r_mbytes_per_sec": 0, 00:09:18.297 "w_mbytes_per_sec": 0 00:09:18.297 }, 00:09:18.298 "claimed": true, 00:09:18.298 "claim_type": "exclusive_write", 00:09:18.298 "zoned": false, 00:09:18.298 "supported_io_types": { 
00:09:18.298 "read": true, 00:09:18.298 "write": true, 00:09:18.298 "unmap": true, 00:09:18.298 "flush": true, 00:09:18.298 "reset": true, 00:09:18.298 "nvme_admin": false, 00:09:18.298 "nvme_io": false, 00:09:18.298 "nvme_io_md": false, 00:09:18.298 "write_zeroes": true, 00:09:18.298 "zcopy": true, 00:09:18.298 "get_zone_info": false, 00:09:18.298 "zone_management": false, 00:09:18.298 "zone_append": false, 00:09:18.298 "compare": false, 00:09:18.298 "compare_and_write": false, 00:09:18.298 "abort": true, 00:09:18.298 "seek_hole": false, 00:09:18.298 "seek_data": false, 00:09:18.298 "copy": true, 00:09:18.298 "nvme_iov_md": false 00:09:18.298 }, 00:09:18.298 "memory_domains": [ 00:09:18.298 { 00:09:18.298 "dma_device_id": "system", 00:09:18.298 "dma_device_type": 1 00:09:18.298 }, 00:09:18.298 { 00:09:18.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.298 "dma_device_type": 2 00:09:18.298 } 00:09:18.298 ], 00:09:18.298 "driver_specific": {} 00:09:18.298 } 00:09:18.298 ] 00:09:18.298 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.298 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:18.298 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:18.298 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.298 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.298 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:18.298 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:18.298 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.298 01:29:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.298 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.298 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.298 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.298 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.298 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.298 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.298 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.298 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.298 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.298 "name": "Existed_Raid", 00:09:18.298 "uuid": "7b2c26a2-1677-47f4-bc72-7c942635136d", 00:09:18.298 "strip_size_kb": 0, 00:09:18.298 "state": "configuring", 00:09:18.298 "raid_level": "raid1", 00:09:18.298 "superblock": true, 00:09:18.298 "num_base_bdevs": 3, 00:09:18.298 "num_base_bdevs_discovered": 1, 00:09:18.298 "num_base_bdevs_operational": 3, 00:09:18.298 "base_bdevs_list": [ 00:09:18.298 { 00:09:18.298 "name": "BaseBdev1", 00:09:18.298 "uuid": "244681dd-0689-43f1-ac7a-7346b07d70a1", 00:09:18.298 "is_configured": true, 00:09:18.298 "data_offset": 2048, 00:09:18.298 "data_size": 63488 00:09:18.298 }, 00:09:18.298 { 00:09:18.298 "name": "BaseBdev2", 00:09:18.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.298 "is_configured": false, 00:09:18.298 "data_offset": 0, 00:09:18.298 "data_size": 0 00:09:18.298 }, 00:09:18.298 { 00:09:18.298 "name": 
"BaseBdev3", 00:09:18.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.298 "is_configured": false, 00:09:18.298 "data_offset": 0, 00:09:18.298 "data_size": 0 00:09:18.298 } 00:09:18.298 ] 00:09:18.298 }' 00:09:18.298 01:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.298 01:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.558 [2024-10-09 01:29:17.340300] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:18.558 [2024-10-09 01:29:17.340343] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.558 [2024-10-09 01:29:17.352326] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:18.558 [2024-10-09 01:29:17.354496] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:18.558 [2024-10-09 01:29:17.354544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:18.558 [2024-10-09 01:29:17.354558] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:18.558 [2024-10-09 01:29:17.354565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.558 "name": "Existed_Raid", 00:09:18.558 "uuid": "30a1db69-112c-4aa1-a47d-e606203f43f7", 00:09:18.558 "strip_size_kb": 0, 00:09:18.558 "state": "configuring", 00:09:18.558 "raid_level": "raid1", 00:09:18.558 "superblock": true, 00:09:18.558 "num_base_bdevs": 3, 00:09:18.558 "num_base_bdevs_discovered": 1, 00:09:18.558 "num_base_bdevs_operational": 3, 00:09:18.558 "base_bdevs_list": [ 00:09:18.558 { 00:09:18.558 "name": "BaseBdev1", 00:09:18.558 "uuid": "244681dd-0689-43f1-ac7a-7346b07d70a1", 00:09:18.558 "is_configured": true, 00:09:18.558 "data_offset": 2048, 00:09:18.558 "data_size": 63488 00:09:18.558 }, 00:09:18.558 { 00:09:18.558 "name": "BaseBdev2", 00:09:18.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.558 "is_configured": false, 00:09:18.558 "data_offset": 0, 00:09:18.558 "data_size": 0 00:09:18.558 }, 00:09:18.558 { 00:09:18.558 "name": "BaseBdev3", 00:09:18.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.558 "is_configured": false, 00:09:18.558 "data_offset": 0, 00:09:18.558 "data_size": 0 00:09:18.558 } 00:09:18.558 ] 00:09:18.558 }' 00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.558 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.128 [2024-10-09 01:29:17.833585] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:19.128 BaseBdev2 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.128 [ 00:09:19.128 { 00:09:19.128 "name": "BaseBdev2", 00:09:19.128 "aliases": [ 00:09:19.128 
"b08ef33e-1712-482c-9eaf-ec2e4bc4c90e" 00:09:19.128 ], 00:09:19.128 "product_name": "Malloc disk", 00:09:19.128 "block_size": 512, 00:09:19.128 "num_blocks": 65536, 00:09:19.128 "uuid": "b08ef33e-1712-482c-9eaf-ec2e4bc4c90e", 00:09:19.128 "assigned_rate_limits": { 00:09:19.128 "rw_ios_per_sec": 0, 00:09:19.128 "rw_mbytes_per_sec": 0, 00:09:19.128 "r_mbytes_per_sec": 0, 00:09:19.128 "w_mbytes_per_sec": 0 00:09:19.128 }, 00:09:19.128 "claimed": true, 00:09:19.128 "claim_type": "exclusive_write", 00:09:19.128 "zoned": false, 00:09:19.128 "supported_io_types": { 00:09:19.128 "read": true, 00:09:19.128 "write": true, 00:09:19.128 "unmap": true, 00:09:19.128 "flush": true, 00:09:19.128 "reset": true, 00:09:19.128 "nvme_admin": false, 00:09:19.128 "nvme_io": false, 00:09:19.128 "nvme_io_md": false, 00:09:19.128 "write_zeroes": true, 00:09:19.128 "zcopy": true, 00:09:19.128 "get_zone_info": false, 00:09:19.128 "zone_management": false, 00:09:19.128 "zone_append": false, 00:09:19.128 "compare": false, 00:09:19.128 "compare_and_write": false, 00:09:19.128 "abort": true, 00:09:19.128 "seek_hole": false, 00:09:19.128 "seek_data": false, 00:09:19.128 "copy": true, 00:09:19.128 "nvme_iov_md": false 00:09:19.128 }, 00:09:19.128 "memory_domains": [ 00:09:19.128 { 00:09:19.128 "dma_device_id": "system", 00:09:19.128 "dma_device_type": 1 00:09:19.128 }, 00:09:19.128 { 00:09:19.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.128 "dma_device_type": 2 00:09:19.128 } 00:09:19.128 ], 00:09:19.128 "driver_specific": {} 00:09:19.128 } 00:09:19.128 ] 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.128 "name": "Existed_Raid", 00:09:19.128 "uuid": "30a1db69-112c-4aa1-a47d-e606203f43f7", 00:09:19.128 
"strip_size_kb": 0, 00:09:19.128 "state": "configuring", 00:09:19.128 "raid_level": "raid1", 00:09:19.128 "superblock": true, 00:09:19.128 "num_base_bdevs": 3, 00:09:19.128 "num_base_bdevs_discovered": 2, 00:09:19.128 "num_base_bdevs_operational": 3, 00:09:19.128 "base_bdevs_list": [ 00:09:19.128 { 00:09:19.128 "name": "BaseBdev1", 00:09:19.128 "uuid": "244681dd-0689-43f1-ac7a-7346b07d70a1", 00:09:19.128 "is_configured": true, 00:09:19.128 "data_offset": 2048, 00:09:19.128 "data_size": 63488 00:09:19.128 }, 00:09:19.128 { 00:09:19.128 "name": "BaseBdev2", 00:09:19.128 "uuid": "b08ef33e-1712-482c-9eaf-ec2e4bc4c90e", 00:09:19.128 "is_configured": true, 00:09:19.128 "data_offset": 2048, 00:09:19.128 "data_size": 63488 00:09:19.128 }, 00:09:19.128 { 00:09:19.128 "name": "BaseBdev3", 00:09:19.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.128 "is_configured": false, 00:09:19.128 "data_offset": 0, 00:09:19.128 "data_size": 0 00:09:19.128 } 00:09:19.128 ] 00:09:19.128 }' 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.128 01:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.698 [2024-10-09 01:29:18.386344] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:19.698 [2024-10-09 01:29:18.386637] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:19.698 [2024-10-09 01:29:18.386693] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:19.698 BaseBdev3 00:09:19.698 [2024-10-09 01:29:18.387028] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:19.698 [2024-10-09 01:29:18.387190] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:19.698 [2024-10-09 01:29:18.387209] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:19.698 [2024-10-09 01:29:18.387341] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.698 [ 00:09:19.698 { 00:09:19.698 "name": "BaseBdev3", 00:09:19.698 "aliases": [ 00:09:19.698 "ccd799ad-bbae-47d2-868e-46aa49f6f335" 00:09:19.698 ], 00:09:19.698 "product_name": "Malloc disk", 00:09:19.698 "block_size": 512, 00:09:19.698 "num_blocks": 65536, 00:09:19.698 "uuid": "ccd799ad-bbae-47d2-868e-46aa49f6f335", 00:09:19.698 "assigned_rate_limits": { 00:09:19.698 "rw_ios_per_sec": 0, 00:09:19.698 "rw_mbytes_per_sec": 0, 00:09:19.698 "r_mbytes_per_sec": 0, 00:09:19.698 "w_mbytes_per_sec": 0 00:09:19.698 }, 00:09:19.698 "claimed": true, 00:09:19.698 "claim_type": "exclusive_write", 00:09:19.698 "zoned": false, 00:09:19.698 "supported_io_types": { 00:09:19.698 "read": true, 00:09:19.698 "write": true, 00:09:19.698 "unmap": true, 00:09:19.698 "flush": true, 00:09:19.698 "reset": true, 00:09:19.698 "nvme_admin": false, 00:09:19.698 "nvme_io": false, 00:09:19.698 "nvme_io_md": false, 00:09:19.698 "write_zeroes": true, 00:09:19.698 "zcopy": true, 00:09:19.698 "get_zone_info": false, 00:09:19.698 "zone_management": false, 00:09:19.698 "zone_append": false, 00:09:19.698 "compare": false, 00:09:19.698 "compare_and_write": false, 00:09:19.698 "abort": true, 00:09:19.698 "seek_hole": false, 00:09:19.698 "seek_data": false, 00:09:19.698 "copy": true, 00:09:19.698 "nvme_iov_md": false 00:09:19.698 }, 00:09:19.698 "memory_domains": [ 00:09:19.698 { 00:09:19.698 "dma_device_id": "system", 00:09:19.698 "dma_device_type": 1 00:09:19.698 }, 00:09:19.698 { 00:09:19.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.698 "dma_device_type": 2 00:09:19.698 } 00:09:19.698 ], 00:09:19.698 "driver_specific": {} 00:09:19.698 } 00:09:19.698 ] 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:19.698 
01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.698 01:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.699 01:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.699 01:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.699 01:29:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.699 "name": "Existed_Raid", 00:09:19.699 "uuid": "30a1db69-112c-4aa1-a47d-e606203f43f7", 00:09:19.699 "strip_size_kb": 0, 00:09:19.699 "state": "online", 00:09:19.699 "raid_level": "raid1", 00:09:19.699 "superblock": true, 00:09:19.699 "num_base_bdevs": 3, 00:09:19.699 "num_base_bdevs_discovered": 3, 00:09:19.699 "num_base_bdevs_operational": 3, 00:09:19.699 "base_bdevs_list": [ 00:09:19.699 { 00:09:19.699 "name": "BaseBdev1", 00:09:19.699 "uuid": "244681dd-0689-43f1-ac7a-7346b07d70a1", 00:09:19.699 "is_configured": true, 00:09:19.699 "data_offset": 2048, 00:09:19.699 "data_size": 63488 00:09:19.699 }, 00:09:19.699 { 00:09:19.699 "name": "BaseBdev2", 00:09:19.699 "uuid": "b08ef33e-1712-482c-9eaf-ec2e4bc4c90e", 00:09:19.699 "is_configured": true, 00:09:19.699 "data_offset": 2048, 00:09:19.699 "data_size": 63488 00:09:19.699 }, 00:09:19.699 { 00:09:19.699 "name": "BaseBdev3", 00:09:19.699 "uuid": "ccd799ad-bbae-47d2-868e-46aa49f6f335", 00:09:19.699 "is_configured": true, 00:09:19.699 "data_offset": 2048, 00:09:19.699 "data_size": 63488 00:09:19.699 } 00:09:19.699 ] 00:09:19.699 }' 00:09:19.699 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.699 01:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.958 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:19.958 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:19.958 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:19.958 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:19.958 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:19.958 
01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:19.958 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:19.958 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:19.958 01:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.958 01:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.218 [2024-10-09 01:29:18.854801] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.218 01:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.218 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:20.218 "name": "Existed_Raid", 00:09:20.218 "aliases": [ 00:09:20.218 "30a1db69-112c-4aa1-a47d-e606203f43f7" 00:09:20.218 ], 00:09:20.218 "product_name": "Raid Volume", 00:09:20.218 "block_size": 512, 00:09:20.218 "num_blocks": 63488, 00:09:20.218 "uuid": "30a1db69-112c-4aa1-a47d-e606203f43f7", 00:09:20.218 "assigned_rate_limits": { 00:09:20.218 "rw_ios_per_sec": 0, 00:09:20.218 "rw_mbytes_per_sec": 0, 00:09:20.218 "r_mbytes_per_sec": 0, 00:09:20.218 "w_mbytes_per_sec": 0 00:09:20.218 }, 00:09:20.218 "claimed": false, 00:09:20.218 "zoned": false, 00:09:20.218 "supported_io_types": { 00:09:20.218 "read": true, 00:09:20.218 "write": true, 00:09:20.218 "unmap": false, 00:09:20.218 "flush": false, 00:09:20.218 "reset": true, 00:09:20.218 "nvme_admin": false, 00:09:20.218 "nvme_io": false, 00:09:20.218 "nvme_io_md": false, 00:09:20.218 "write_zeroes": true, 00:09:20.218 "zcopy": false, 00:09:20.218 "get_zone_info": false, 00:09:20.218 "zone_management": false, 00:09:20.218 "zone_append": false, 00:09:20.218 "compare": false, 00:09:20.218 "compare_and_write": false, 00:09:20.218 
"abort": false, 00:09:20.218 "seek_hole": false, 00:09:20.218 "seek_data": false, 00:09:20.218 "copy": false, 00:09:20.218 "nvme_iov_md": false 00:09:20.218 }, 00:09:20.218 "memory_domains": [ 00:09:20.218 { 00:09:20.218 "dma_device_id": "system", 00:09:20.218 "dma_device_type": 1 00:09:20.218 }, 00:09:20.218 { 00:09:20.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.218 "dma_device_type": 2 00:09:20.218 }, 00:09:20.218 { 00:09:20.218 "dma_device_id": "system", 00:09:20.218 "dma_device_type": 1 00:09:20.218 }, 00:09:20.218 { 00:09:20.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.218 "dma_device_type": 2 00:09:20.218 }, 00:09:20.218 { 00:09:20.218 "dma_device_id": "system", 00:09:20.219 "dma_device_type": 1 00:09:20.219 }, 00:09:20.219 { 00:09:20.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.219 "dma_device_type": 2 00:09:20.219 } 00:09:20.219 ], 00:09:20.219 "driver_specific": { 00:09:20.219 "raid": { 00:09:20.219 "uuid": "30a1db69-112c-4aa1-a47d-e606203f43f7", 00:09:20.219 "strip_size_kb": 0, 00:09:20.219 "state": "online", 00:09:20.219 "raid_level": "raid1", 00:09:20.219 "superblock": true, 00:09:20.219 "num_base_bdevs": 3, 00:09:20.219 "num_base_bdevs_discovered": 3, 00:09:20.219 "num_base_bdevs_operational": 3, 00:09:20.219 "base_bdevs_list": [ 00:09:20.219 { 00:09:20.219 "name": "BaseBdev1", 00:09:20.219 "uuid": "244681dd-0689-43f1-ac7a-7346b07d70a1", 00:09:20.219 "is_configured": true, 00:09:20.219 "data_offset": 2048, 00:09:20.219 "data_size": 63488 00:09:20.219 }, 00:09:20.219 { 00:09:20.219 "name": "BaseBdev2", 00:09:20.219 "uuid": "b08ef33e-1712-482c-9eaf-ec2e4bc4c90e", 00:09:20.219 "is_configured": true, 00:09:20.219 "data_offset": 2048, 00:09:20.219 "data_size": 63488 00:09:20.219 }, 00:09:20.219 { 00:09:20.219 "name": "BaseBdev3", 00:09:20.219 "uuid": "ccd799ad-bbae-47d2-868e-46aa49f6f335", 00:09:20.219 "is_configured": true, 00:09:20.219 "data_offset": 2048, 00:09:20.219 "data_size": 63488 00:09:20.219 } 00:09:20.219 ] 
00:09:20.219 } 00:09:20.219 } 00:09:20.219 }' 00:09:20.219 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:20.219 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:20.219 BaseBdev2 00:09:20.219 BaseBdev3' 00:09:20.219 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.219 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:20.219 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.219 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:20.219 01:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.219 01:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.219 01:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.219 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.219 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.219 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.219 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.219 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:20.219 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:09:20.219 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.219 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.219 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.219 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.219 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.219 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.219 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:20.219 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.219 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.219 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.219 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.479 [2024-10-09 01:29:19.146644] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: BaseBdev1 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.479 01:29:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.479 "name": "Existed_Raid", 00:09:20.479 "uuid": "30a1db69-112c-4aa1-a47d-e606203f43f7", 00:09:20.479 "strip_size_kb": 0, 00:09:20.479 "state": "online", 00:09:20.479 "raid_level": "raid1", 00:09:20.479 "superblock": true, 00:09:20.479 "num_base_bdevs": 3, 00:09:20.479 "num_base_bdevs_discovered": 2, 00:09:20.479 "num_base_bdevs_operational": 2, 00:09:20.479 "base_bdevs_list": [ 00:09:20.479 { 00:09:20.479 "name": null, 00:09:20.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.479 "is_configured": false, 00:09:20.479 "data_offset": 0, 00:09:20.479 "data_size": 63488 00:09:20.479 }, 00:09:20.479 { 00:09:20.479 "name": "BaseBdev2", 00:09:20.479 "uuid": "b08ef33e-1712-482c-9eaf-ec2e4bc4c90e", 00:09:20.479 "is_configured": true, 00:09:20.479 "data_offset": 2048, 00:09:20.479 "data_size": 63488 00:09:20.479 }, 00:09:20.479 { 00:09:20.479 "name": "BaseBdev3", 00:09:20.479 "uuid": "ccd799ad-bbae-47d2-868e-46aa49f6f335", 00:09:20.479 "is_configured": true, 00:09:20.479 "data_offset": 2048, 00:09:20.479 "data_size": 63488 00:09:20.479 } 00:09:20.479 ] 00:09:20.479 }' 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.479 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.739 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:20.739 01:29:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:20.739 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.739 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:20.739 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.739 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.739 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.999 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:20.999 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:20.999 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:20.999 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.999 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.999 [2024-10-09 01:29:19.662356] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:20.999 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.999 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:20.999 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:20.999 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.999 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:20.999 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:09:20.999 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.999 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.999 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:20.999 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:20.999 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:20.999 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.000 [2024-10-09 01:29:19.738412] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:21.000 [2024-10-09 01:29:19.738608] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:21.000 [2024-10-09 01:29:19.759400] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:21.000 [2024-10-09 01:29:19.759515] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:21.000 [2024-10-09 01:29:19.759569] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.000 BaseBdev2 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:21.000 01:29:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.000 [ 00:09:21.000 { 00:09:21.000 "name": "BaseBdev2", 00:09:21.000 "aliases": [ 00:09:21.000 "4575c134-dd07-4152-b178-f2cdaaff44b7" 00:09:21.000 ], 00:09:21.000 "product_name": "Malloc disk", 00:09:21.000 "block_size": 512, 00:09:21.000 "num_blocks": 65536, 00:09:21.000 "uuid": "4575c134-dd07-4152-b178-f2cdaaff44b7", 00:09:21.000 "assigned_rate_limits": { 00:09:21.000 "rw_ios_per_sec": 0, 00:09:21.000 "rw_mbytes_per_sec": 0, 00:09:21.000 "r_mbytes_per_sec": 0, 00:09:21.000 "w_mbytes_per_sec": 0 00:09:21.000 }, 00:09:21.000 "claimed": false, 00:09:21.000 "zoned": false, 00:09:21.000 "supported_io_types": { 00:09:21.000 "read": true, 00:09:21.000 "write": true, 00:09:21.000 "unmap": true, 00:09:21.000 "flush": true, 00:09:21.000 "reset": true, 00:09:21.000 "nvme_admin": false, 00:09:21.000 "nvme_io": false, 00:09:21.000 "nvme_io_md": false, 00:09:21.000 "write_zeroes": true, 00:09:21.000 "zcopy": true, 00:09:21.000 "get_zone_info": false, 00:09:21.000 "zone_management": false, 00:09:21.000 "zone_append": false, 00:09:21.000 "compare": false, 00:09:21.000 
"compare_and_write": false, 00:09:21.000 "abort": true, 00:09:21.000 "seek_hole": false, 00:09:21.000 "seek_data": false, 00:09:21.000 "copy": true, 00:09:21.000 "nvme_iov_md": false 00:09:21.000 }, 00:09:21.000 "memory_domains": [ 00:09:21.000 { 00:09:21.000 "dma_device_id": "system", 00:09:21.000 "dma_device_type": 1 00:09:21.000 }, 00:09:21.000 { 00:09:21.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.000 "dma_device_type": 2 00:09:21.000 } 00:09:21.000 ], 00:09:21.000 "driver_specific": {} 00:09:21.000 } 00:09:21.000 ] 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.000 BaseBdev3 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:21.000 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:21.260 01:29:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:21.260 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:21.260 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:21.260 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.260 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.260 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.260 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:21.260 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.260 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.260 [ 00:09:21.260 { 00:09:21.260 "name": "BaseBdev3", 00:09:21.260 "aliases": [ 00:09:21.260 "88126ab1-4309-4807-9510-9587288e5899" 00:09:21.260 ], 00:09:21.260 "product_name": "Malloc disk", 00:09:21.260 "block_size": 512, 00:09:21.260 "num_blocks": 65536, 00:09:21.260 "uuid": "88126ab1-4309-4807-9510-9587288e5899", 00:09:21.260 "assigned_rate_limits": { 00:09:21.260 "rw_ios_per_sec": 0, 00:09:21.260 "rw_mbytes_per_sec": 0, 00:09:21.260 "r_mbytes_per_sec": 0, 00:09:21.260 "w_mbytes_per_sec": 0 00:09:21.260 }, 00:09:21.260 "claimed": false, 00:09:21.260 "zoned": false, 00:09:21.260 "supported_io_types": { 00:09:21.260 "read": true, 00:09:21.260 "write": true, 00:09:21.260 "unmap": true, 00:09:21.260 "flush": true, 00:09:21.260 "reset": true, 00:09:21.260 "nvme_admin": false, 00:09:21.260 "nvme_io": false, 00:09:21.260 "nvme_io_md": false, 00:09:21.260 "write_zeroes": true, 00:09:21.260 "zcopy": true, 00:09:21.260 "get_zone_info": false, 00:09:21.260 "zone_management": false, 00:09:21.260 
"zone_append": false, 00:09:21.260 "compare": false, 00:09:21.260 "compare_and_write": false, 00:09:21.260 "abort": true, 00:09:21.260 "seek_hole": false, 00:09:21.260 "seek_data": false, 00:09:21.260 "copy": true, 00:09:21.260 "nvme_iov_md": false 00:09:21.260 }, 00:09:21.260 "memory_domains": [ 00:09:21.260 { 00:09:21.260 "dma_device_id": "system", 00:09:21.260 "dma_device_type": 1 00:09:21.260 }, 00:09:21.260 { 00:09:21.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.260 "dma_device_type": 2 00:09:21.260 } 00:09:21.260 ], 00:09:21.260 "driver_specific": {} 00:09:21.260 } 00:09:21.260 ] 00:09:21.260 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.260 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:21.260 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:21.260 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:21.260 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:21.260 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.261 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.261 [2024-10-09 01:29:19.931825] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:21.261 [2024-10-09 01:29:19.931937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:21.261 [2024-10-09 01:29:19.931964] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:21.261 [2024-10-09 01:29:19.934086] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:21.261 01:29:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.261 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:21.261 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.261 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.261 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.261 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:21.261 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.261 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.261 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.261 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.261 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.261 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.261 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.261 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.261 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.261 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.261 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.261 "name": 
"Existed_Raid", 00:09:21.261 "uuid": "bb2b382e-e1e7-44b5-9536-87c6a65b2eff", 00:09:21.261 "strip_size_kb": 0, 00:09:21.261 "state": "configuring", 00:09:21.261 "raid_level": "raid1", 00:09:21.261 "superblock": true, 00:09:21.261 "num_base_bdevs": 3, 00:09:21.261 "num_base_bdevs_discovered": 2, 00:09:21.261 "num_base_bdevs_operational": 3, 00:09:21.261 "base_bdevs_list": [ 00:09:21.261 { 00:09:21.261 "name": "BaseBdev1", 00:09:21.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.261 "is_configured": false, 00:09:21.261 "data_offset": 0, 00:09:21.261 "data_size": 0 00:09:21.261 }, 00:09:21.261 { 00:09:21.261 "name": "BaseBdev2", 00:09:21.261 "uuid": "4575c134-dd07-4152-b178-f2cdaaff44b7", 00:09:21.261 "is_configured": true, 00:09:21.261 "data_offset": 2048, 00:09:21.261 "data_size": 63488 00:09:21.261 }, 00:09:21.261 { 00:09:21.261 "name": "BaseBdev3", 00:09:21.261 "uuid": "88126ab1-4309-4807-9510-9587288e5899", 00:09:21.261 "is_configured": true, 00:09:21.261 "data_offset": 2048, 00:09:21.261 "data_size": 63488 00:09:21.261 } 00:09:21.261 ] 00:09:21.261 }' 00:09:21.261 01:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.261 01:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.521 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:21.521 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.521 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.521 [2024-10-09 01:29:20.251872] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:21.521 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.521 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 3 00:09:21.521 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.521 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.521 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.521 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:21.521 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.521 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.521 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.521 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.521 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.521 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.521 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.521 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.521 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.521 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.521 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.521 "name": "Existed_Raid", 00:09:21.521 "uuid": "bb2b382e-e1e7-44b5-9536-87c6a65b2eff", 00:09:21.521 "strip_size_kb": 0, 00:09:21.521 "state": "configuring", 00:09:21.521 "raid_level": "raid1", 00:09:21.521 "superblock": true, 00:09:21.521 
"num_base_bdevs": 3, 00:09:21.521 "num_base_bdevs_discovered": 1, 00:09:21.521 "num_base_bdevs_operational": 3, 00:09:21.521 "base_bdevs_list": [ 00:09:21.521 { 00:09:21.521 "name": "BaseBdev1", 00:09:21.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.521 "is_configured": false, 00:09:21.521 "data_offset": 0, 00:09:21.521 "data_size": 0 00:09:21.521 }, 00:09:21.521 { 00:09:21.521 "name": null, 00:09:21.521 "uuid": "4575c134-dd07-4152-b178-f2cdaaff44b7", 00:09:21.521 "is_configured": false, 00:09:21.521 "data_offset": 0, 00:09:21.521 "data_size": 63488 00:09:21.521 }, 00:09:21.521 { 00:09:21.521 "name": "BaseBdev3", 00:09:21.521 "uuid": "88126ab1-4309-4807-9510-9587288e5899", 00:09:21.521 "is_configured": true, 00:09:21.521 "data_offset": 2048, 00:09:21.521 "data_size": 63488 00:09:21.521 } 00:09:21.521 ] 00:09:21.521 }' 00:09:21.521 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.521 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.091 [2024-10-09 01:29:20.776534] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.091 BaseBdev1 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.091 [ 00:09:22.091 { 00:09:22.091 "name": "BaseBdev1", 00:09:22.091 "aliases": [ 00:09:22.091 
"a08a8cdb-abf3-41dc-a82e-49525d206179" 00:09:22.091 ], 00:09:22.091 "product_name": "Malloc disk", 00:09:22.091 "block_size": 512, 00:09:22.091 "num_blocks": 65536, 00:09:22.091 "uuid": "a08a8cdb-abf3-41dc-a82e-49525d206179", 00:09:22.091 "assigned_rate_limits": { 00:09:22.091 "rw_ios_per_sec": 0, 00:09:22.091 "rw_mbytes_per_sec": 0, 00:09:22.091 "r_mbytes_per_sec": 0, 00:09:22.091 "w_mbytes_per_sec": 0 00:09:22.091 }, 00:09:22.091 "claimed": true, 00:09:22.091 "claim_type": "exclusive_write", 00:09:22.091 "zoned": false, 00:09:22.091 "supported_io_types": { 00:09:22.091 "read": true, 00:09:22.091 "write": true, 00:09:22.091 "unmap": true, 00:09:22.091 "flush": true, 00:09:22.091 "reset": true, 00:09:22.091 "nvme_admin": false, 00:09:22.091 "nvme_io": false, 00:09:22.091 "nvme_io_md": false, 00:09:22.091 "write_zeroes": true, 00:09:22.091 "zcopy": true, 00:09:22.091 "get_zone_info": false, 00:09:22.091 "zone_management": false, 00:09:22.091 "zone_append": false, 00:09:22.091 "compare": false, 00:09:22.091 "compare_and_write": false, 00:09:22.091 "abort": true, 00:09:22.091 "seek_hole": false, 00:09:22.091 "seek_data": false, 00:09:22.091 "copy": true, 00:09:22.091 "nvme_iov_md": false 00:09:22.091 }, 00:09:22.091 "memory_domains": [ 00:09:22.091 { 00:09:22.091 "dma_device_id": "system", 00:09:22.091 "dma_device_type": 1 00:09:22.091 }, 00:09:22.091 { 00:09:22.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.091 "dma_device_type": 2 00:09:22.091 } 00:09:22.091 ], 00:09:22.091 "driver_specific": {} 00:09:22.091 } 00:09:22.091 ] 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.091 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.091 "name": "Existed_Raid", 00:09:22.091 "uuid": "bb2b382e-e1e7-44b5-9536-87c6a65b2eff", 00:09:22.091 "strip_size_kb": 0, 00:09:22.091 "state": "configuring", 00:09:22.091 "raid_level": "raid1", 00:09:22.091 "superblock": true, 00:09:22.091 "num_base_bdevs": 3, 00:09:22.091 "num_base_bdevs_discovered": 2, 00:09:22.091 
"num_base_bdevs_operational": 3, 00:09:22.091 "base_bdevs_list": [ 00:09:22.091 { 00:09:22.091 "name": "BaseBdev1", 00:09:22.091 "uuid": "a08a8cdb-abf3-41dc-a82e-49525d206179", 00:09:22.091 "is_configured": true, 00:09:22.091 "data_offset": 2048, 00:09:22.091 "data_size": 63488 00:09:22.091 }, 00:09:22.091 { 00:09:22.092 "name": null, 00:09:22.092 "uuid": "4575c134-dd07-4152-b178-f2cdaaff44b7", 00:09:22.092 "is_configured": false, 00:09:22.092 "data_offset": 0, 00:09:22.092 "data_size": 63488 00:09:22.092 }, 00:09:22.092 { 00:09:22.092 "name": "BaseBdev3", 00:09:22.092 "uuid": "88126ab1-4309-4807-9510-9587288e5899", 00:09:22.092 "is_configured": true, 00:09:22.092 "data_offset": 2048, 00:09:22.092 "data_size": 63488 00:09:22.092 } 00:09:22.092 ] 00:09:22.092 }' 00:09:22.092 01:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.092 01:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.351 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.351 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:22.351 01:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.351 01:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.351 01:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.610 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:22.610 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:22.610 01:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.610 01:29:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:22.610 [2024-10-09 01:29:21.264712] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:22.610 01:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.610 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:22.610 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.610 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.610 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.610 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.610 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.610 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.610 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.610 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.610 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.610 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.610 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.610 01:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.610 01:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.610 01:29:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.610 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.610 "name": "Existed_Raid", 00:09:22.610 "uuid": "bb2b382e-e1e7-44b5-9536-87c6a65b2eff", 00:09:22.610 "strip_size_kb": 0, 00:09:22.610 "state": "configuring", 00:09:22.610 "raid_level": "raid1", 00:09:22.610 "superblock": true, 00:09:22.610 "num_base_bdevs": 3, 00:09:22.610 "num_base_bdevs_discovered": 1, 00:09:22.610 "num_base_bdevs_operational": 3, 00:09:22.610 "base_bdevs_list": [ 00:09:22.610 { 00:09:22.610 "name": "BaseBdev1", 00:09:22.610 "uuid": "a08a8cdb-abf3-41dc-a82e-49525d206179", 00:09:22.610 "is_configured": true, 00:09:22.610 "data_offset": 2048, 00:09:22.610 "data_size": 63488 00:09:22.610 }, 00:09:22.610 { 00:09:22.610 "name": null, 00:09:22.610 "uuid": "4575c134-dd07-4152-b178-f2cdaaff44b7", 00:09:22.610 "is_configured": false, 00:09:22.610 "data_offset": 0, 00:09:22.610 "data_size": 63488 00:09:22.610 }, 00:09:22.610 { 00:09:22.610 "name": null, 00:09:22.611 "uuid": "88126ab1-4309-4807-9510-9587288e5899", 00:09:22.611 "is_configured": false, 00:09:22.611 "data_offset": 0, 00:09:22.611 "data_size": 63488 00:09:22.611 } 00:09:22.611 ] 00:09:22.611 }' 00:09:22.611 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.611 01:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.870 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.870 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:22.870 01:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.870 01:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.870 01:29:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.129 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:23.129 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:23.129 01:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.129 01:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.129 [2024-10-09 01:29:21.772846] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:23.129 01:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.129 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:23.129 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.129 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.129 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.129 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.129 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.129 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.129 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.129 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.129 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.129 01:29:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.129 01:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.129 01:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.129 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.129 01:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.129 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.129 "name": "Existed_Raid", 00:09:23.129 "uuid": "bb2b382e-e1e7-44b5-9536-87c6a65b2eff", 00:09:23.129 "strip_size_kb": 0, 00:09:23.129 "state": "configuring", 00:09:23.129 "raid_level": "raid1", 00:09:23.129 "superblock": true, 00:09:23.129 "num_base_bdevs": 3, 00:09:23.129 "num_base_bdevs_discovered": 2, 00:09:23.129 "num_base_bdevs_operational": 3, 00:09:23.129 "base_bdevs_list": [ 00:09:23.129 { 00:09:23.129 "name": "BaseBdev1", 00:09:23.129 "uuid": "a08a8cdb-abf3-41dc-a82e-49525d206179", 00:09:23.129 "is_configured": true, 00:09:23.129 "data_offset": 2048, 00:09:23.129 "data_size": 63488 00:09:23.129 }, 00:09:23.129 { 00:09:23.129 "name": null, 00:09:23.129 "uuid": "4575c134-dd07-4152-b178-f2cdaaff44b7", 00:09:23.129 "is_configured": false, 00:09:23.129 "data_offset": 0, 00:09:23.129 "data_size": 63488 00:09:23.129 }, 00:09:23.129 { 00:09:23.129 "name": "BaseBdev3", 00:09:23.129 "uuid": "88126ab1-4309-4807-9510-9587288e5899", 00:09:23.129 "is_configured": true, 00:09:23.129 "data_offset": 2048, 00:09:23.129 "data_size": 63488 00:09:23.129 } 00:09:23.129 ] 00:09:23.129 }' 00:09:23.129 01:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.129 01:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.389 
01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:23.389 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.389 01:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.389 01:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.389 01:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.389 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:23.389 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:23.389 01:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.389 01:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.389 [2024-10-09 01:29:22.249000] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:23.389 01:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.389 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:23.389 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.389 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.389 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.389 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.389 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.389 
01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.389 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.389 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.389 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.389 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.389 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.389 01:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.389 01:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.649 01:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.649 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.649 "name": "Existed_Raid", 00:09:23.649 "uuid": "bb2b382e-e1e7-44b5-9536-87c6a65b2eff", 00:09:23.649 "strip_size_kb": 0, 00:09:23.649 "state": "configuring", 00:09:23.649 "raid_level": "raid1", 00:09:23.649 "superblock": true, 00:09:23.649 "num_base_bdevs": 3, 00:09:23.649 "num_base_bdevs_discovered": 1, 00:09:23.649 "num_base_bdevs_operational": 3, 00:09:23.649 "base_bdevs_list": [ 00:09:23.649 { 00:09:23.649 "name": null, 00:09:23.649 "uuid": "a08a8cdb-abf3-41dc-a82e-49525d206179", 00:09:23.649 "is_configured": false, 00:09:23.649 "data_offset": 0, 00:09:23.649 "data_size": 63488 00:09:23.649 }, 00:09:23.649 { 00:09:23.649 "name": null, 00:09:23.649 "uuid": "4575c134-dd07-4152-b178-f2cdaaff44b7", 00:09:23.649 "is_configured": false, 00:09:23.649 "data_offset": 0, 00:09:23.649 "data_size": 63488 00:09:23.649 }, 00:09:23.649 { 00:09:23.649 "name": 
"BaseBdev3", 00:09:23.649 "uuid": "88126ab1-4309-4807-9510-9587288e5899", 00:09:23.649 "is_configured": true, 00:09:23.649 "data_offset": 2048, 00:09:23.649 "data_size": 63488 00:09:23.649 } 00:09:23.649 ] 00:09:23.649 }' 00:09:23.649 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.649 01:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.908 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.909 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:23.909 01:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.909 01:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.909 01:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.909 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:23.909 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:23.909 01:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.909 01:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.909 [2024-10-09 01:29:22.716211] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:23.909 01:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.909 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:23.909 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:09:23.909 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.909 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.909 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.909 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.909 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.909 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.909 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.909 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.909 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.909 01:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.909 01:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.909 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.909 01:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.909 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.909 "name": "Existed_Raid", 00:09:23.909 "uuid": "bb2b382e-e1e7-44b5-9536-87c6a65b2eff", 00:09:23.909 "strip_size_kb": 0, 00:09:23.909 "state": "configuring", 00:09:23.909 "raid_level": "raid1", 00:09:23.909 "superblock": true, 00:09:23.909 "num_base_bdevs": 3, 00:09:23.909 "num_base_bdevs_discovered": 2, 00:09:23.909 "num_base_bdevs_operational": 3, 00:09:23.909 
"base_bdevs_list": [ 00:09:23.909 { 00:09:23.909 "name": null, 00:09:23.909 "uuid": "a08a8cdb-abf3-41dc-a82e-49525d206179", 00:09:23.909 "is_configured": false, 00:09:23.909 "data_offset": 0, 00:09:23.909 "data_size": 63488 00:09:23.909 }, 00:09:23.909 { 00:09:23.909 "name": "BaseBdev2", 00:09:23.909 "uuid": "4575c134-dd07-4152-b178-f2cdaaff44b7", 00:09:23.909 "is_configured": true, 00:09:23.909 "data_offset": 2048, 00:09:23.909 "data_size": 63488 00:09:23.909 }, 00:09:23.909 { 00:09:23.909 "name": "BaseBdev3", 00:09:23.909 "uuid": "88126ab1-4309-4807-9510-9587288e5899", 00:09:23.909 "is_configured": true, 00:09:23.909 "data_offset": 2048, 00:09:23.909 "data_size": 63488 00:09:23.909 } 00:09:23.909 ] 00:09:23.909 }' 00:09:23.909 01:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.909 01:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.479 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.479 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:24.479 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.479 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.479 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.479 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:24.479 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.479 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:24.479 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:24.479 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.479 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.479 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a08a8cdb-abf3-41dc-a82e-49525d206179 00:09:24.479 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.479 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.479 [2024-10-09 01:29:23.223996] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:24.479 [2024-10-09 01:29:23.224237] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:24.479 [2024-10-09 01:29:23.224289] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:24.479 NewBaseBdev 00:09:24.479 [2024-10-09 01:29:23.224631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:09:24.479 [2024-10-09 01:29:23.224813] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:24.480 [2024-10-09 01:29:23.224854] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:24.480 [2024-10-09 01:29:23.224994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.480 [ 00:09:24.480 { 00:09:24.480 "name": "NewBaseBdev", 00:09:24.480 "aliases": [ 00:09:24.480 "a08a8cdb-abf3-41dc-a82e-49525d206179" 00:09:24.480 ], 00:09:24.480 "product_name": "Malloc disk", 00:09:24.480 "block_size": 512, 00:09:24.480 "num_blocks": 65536, 00:09:24.480 "uuid": "a08a8cdb-abf3-41dc-a82e-49525d206179", 00:09:24.480 "assigned_rate_limits": { 00:09:24.480 "rw_ios_per_sec": 0, 00:09:24.480 "rw_mbytes_per_sec": 0, 00:09:24.480 "r_mbytes_per_sec": 0, 00:09:24.480 "w_mbytes_per_sec": 0 00:09:24.480 }, 00:09:24.480 "claimed": true, 00:09:24.480 "claim_type": "exclusive_write", 00:09:24.480 "zoned": false, 00:09:24.480 "supported_io_types": { 00:09:24.480 "read": true, 00:09:24.480 "write": true, 00:09:24.480 "unmap": true, 00:09:24.480 "flush": true, 00:09:24.480 "reset": true, 00:09:24.480 "nvme_admin": 
false, 00:09:24.480 "nvme_io": false, 00:09:24.480 "nvme_io_md": false, 00:09:24.480 "write_zeroes": true, 00:09:24.480 "zcopy": true, 00:09:24.480 "get_zone_info": false, 00:09:24.480 "zone_management": false, 00:09:24.480 "zone_append": false, 00:09:24.480 "compare": false, 00:09:24.480 "compare_and_write": false, 00:09:24.480 "abort": true, 00:09:24.480 "seek_hole": false, 00:09:24.480 "seek_data": false, 00:09:24.480 "copy": true, 00:09:24.480 "nvme_iov_md": false 00:09:24.480 }, 00:09:24.480 "memory_domains": [ 00:09:24.480 { 00:09:24.480 "dma_device_id": "system", 00:09:24.480 "dma_device_type": 1 00:09:24.480 }, 00:09:24.480 { 00:09:24.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.480 "dma_device_type": 2 00:09:24.480 } 00:09:24.480 ], 00:09:24.480 "driver_specific": {} 00:09:24.480 } 00:09:24.480 ] 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.480 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.480 "name": "Existed_Raid", 00:09:24.480 "uuid": "bb2b382e-e1e7-44b5-9536-87c6a65b2eff", 00:09:24.480 "strip_size_kb": 0, 00:09:24.480 "state": "online", 00:09:24.480 "raid_level": "raid1", 00:09:24.480 "superblock": true, 00:09:24.480 "num_base_bdevs": 3, 00:09:24.480 "num_base_bdevs_discovered": 3, 00:09:24.480 "num_base_bdevs_operational": 3, 00:09:24.480 "base_bdevs_list": [ 00:09:24.480 { 00:09:24.480 "name": "NewBaseBdev", 00:09:24.480 "uuid": "a08a8cdb-abf3-41dc-a82e-49525d206179", 00:09:24.481 "is_configured": true, 00:09:24.481 "data_offset": 2048, 00:09:24.481 "data_size": 63488 00:09:24.481 }, 00:09:24.481 { 00:09:24.481 "name": "BaseBdev2", 00:09:24.481 "uuid": "4575c134-dd07-4152-b178-f2cdaaff44b7", 00:09:24.481 "is_configured": true, 00:09:24.481 "data_offset": 2048, 00:09:24.481 "data_size": 63488 00:09:24.481 }, 00:09:24.481 { 00:09:24.481 "name": "BaseBdev3", 00:09:24.481 "uuid": "88126ab1-4309-4807-9510-9587288e5899", 00:09:24.481 "is_configured": true, 00:09:24.481 "data_offset": 2048, 00:09:24.481 "data_size": 63488 00:09:24.481 } 
00:09:24.481 ] 00:09:24.481 }' 00:09:24.481 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.481 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.051 [2024-10-09 01:29:23.732443] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:25.051 "name": "Existed_Raid", 00:09:25.051 "aliases": [ 00:09:25.051 "bb2b382e-e1e7-44b5-9536-87c6a65b2eff" 00:09:25.051 ], 00:09:25.051 "product_name": "Raid Volume", 00:09:25.051 "block_size": 512, 00:09:25.051 "num_blocks": 63488, 00:09:25.051 "uuid": 
"bb2b382e-e1e7-44b5-9536-87c6a65b2eff", 00:09:25.051 "assigned_rate_limits": { 00:09:25.051 "rw_ios_per_sec": 0, 00:09:25.051 "rw_mbytes_per_sec": 0, 00:09:25.051 "r_mbytes_per_sec": 0, 00:09:25.051 "w_mbytes_per_sec": 0 00:09:25.051 }, 00:09:25.051 "claimed": false, 00:09:25.051 "zoned": false, 00:09:25.051 "supported_io_types": { 00:09:25.051 "read": true, 00:09:25.051 "write": true, 00:09:25.051 "unmap": false, 00:09:25.051 "flush": false, 00:09:25.051 "reset": true, 00:09:25.051 "nvme_admin": false, 00:09:25.051 "nvme_io": false, 00:09:25.051 "nvme_io_md": false, 00:09:25.051 "write_zeroes": true, 00:09:25.051 "zcopy": false, 00:09:25.051 "get_zone_info": false, 00:09:25.051 "zone_management": false, 00:09:25.051 "zone_append": false, 00:09:25.051 "compare": false, 00:09:25.051 "compare_and_write": false, 00:09:25.051 "abort": false, 00:09:25.051 "seek_hole": false, 00:09:25.051 "seek_data": false, 00:09:25.051 "copy": false, 00:09:25.051 "nvme_iov_md": false 00:09:25.051 }, 00:09:25.051 "memory_domains": [ 00:09:25.051 { 00:09:25.051 "dma_device_id": "system", 00:09:25.051 "dma_device_type": 1 00:09:25.051 }, 00:09:25.051 { 00:09:25.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.051 "dma_device_type": 2 00:09:25.051 }, 00:09:25.051 { 00:09:25.051 "dma_device_id": "system", 00:09:25.051 "dma_device_type": 1 00:09:25.051 }, 00:09:25.051 { 00:09:25.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.051 "dma_device_type": 2 00:09:25.051 }, 00:09:25.051 { 00:09:25.051 "dma_device_id": "system", 00:09:25.051 "dma_device_type": 1 00:09:25.051 }, 00:09:25.051 { 00:09:25.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.051 "dma_device_type": 2 00:09:25.051 } 00:09:25.051 ], 00:09:25.051 "driver_specific": { 00:09:25.051 "raid": { 00:09:25.051 "uuid": "bb2b382e-e1e7-44b5-9536-87c6a65b2eff", 00:09:25.051 "strip_size_kb": 0, 00:09:25.051 "state": "online", 00:09:25.051 "raid_level": "raid1", 00:09:25.051 "superblock": true, 00:09:25.051 "num_base_bdevs": 
3, 00:09:25.051 "num_base_bdevs_discovered": 3, 00:09:25.051 "num_base_bdevs_operational": 3, 00:09:25.051 "base_bdevs_list": [ 00:09:25.051 { 00:09:25.051 "name": "NewBaseBdev", 00:09:25.051 "uuid": "a08a8cdb-abf3-41dc-a82e-49525d206179", 00:09:25.051 "is_configured": true, 00:09:25.051 "data_offset": 2048, 00:09:25.051 "data_size": 63488 00:09:25.051 }, 00:09:25.051 { 00:09:25.051 "name": "BaseBdev2", 00:09:25.051 "uuid": "4575c134-dd07-4152-b178-f2cdaaff44b7", 00:09:25.051 "is_configured": true, 00:09:25.051 "data_offset": 2048, 00:09:25.051 "data_size": 63488 00:09:25.051 }, 00:09:25.051 { 00:09:25.051 "name": "BaseBdev3", 00:09:25.051 "uuid": "88126ab1-4309-4807-9510-9587288e5899", 00:09:25.051 "is_configured": true, 00:09:25.051 "data_offset": 2048, 00:09:25.051 "data_size": 63488 00:09:25.051 } 00:09:25.051 ] 00:09:25.051 } 00:09:25.051 } 00:09:25.051 }' 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:25.051 BaseBdev2 00:09:25.051 BaseBdev3' 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.051 01:29:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.051 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.311 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.311 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.311 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.311 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:25.311 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.311 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.311 01:29:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.311 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.311 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.311 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.311 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:25.311 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.311 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.311 [2024-10-09 01:29:23.992207] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:25.311 [2024-10-09 01:29:23.992268] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:25.311 [2024-10-09 01:29:23.992350] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.311 [2024-10-09 01:29:23.992653] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:25.311 [2024-10-09 01:29:23.992708] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:25.311 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.311 01:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80167 00:09:25.311 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 80167 ']' 00:09:25.311 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 80167 00:09:25.311 01:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 
00:09:25.311 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:25.311 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80167 00:09:25.311 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:25.311 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:25.311 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80167' 00:09:25.311 killing process with pid 80167 00:09:25.311 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 80167 00:09:25.311 [2024-10-09 01:29:24.035276] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:25.311 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 80167 00:09:25.311 [2024-10-09 01:29:24.093125] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:25.571 01:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:25.571 00:09:25.571 real 0m8.937s 00:09:25.571 user 0m14.962s 00:09:25.571 sys 0m1.899s 00:09:25.571 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:25.571 01:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.571 ************************************ 00:09:25.571 END TEST raid_state_function_test_sb 00:09:25.571 ************************************ 00:09:25.832 01:29:24 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:09:25.832 01:29:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:25.832 01:29:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:25.832 01:29:24 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:09:25.832 ************************************ 00:09:25.832 START TEST raid_superblock_test 00:09:25.832 ************************************ 00:09:25.832 01:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:09:25.832 01:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:25.832 01:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:25.832 01:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:25.832 01:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:25.832 01:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:25.832 01:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:25.832 01:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:25.832 01:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:25.832 01:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:25.832 01:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:25.832 01:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:25.832 01:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:25.832 01:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:25.832 01:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:25.832 01:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:25.832 01:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=80776 00:09:25.832 01:29:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:25.832 01:29:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 80776 00:09:25.832 01:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 80776 ']' 00:09:25.832 01:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.832 01:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:25.832 01:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.832 01:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:25.832 01:29:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.832 [2024-10-09 01:29:24.626660] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:09:25.832 [2024-10-09 01:29:24.626853] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80776 ] 00:09:26.092 [2024-10-09 01:29:24.758764] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:26.092 [2024-10-09 01:29:24.787897] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.092 [2024-10-09 01:29:24.858235] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.092 [2024-10-09 01:29:24.934300] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.092 [2024-10-09 01:29:24.934434] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.662 malloc1 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.662 [2024-10-09 01:29:25.473298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:26.662 [2024-10-09 01:29:25.473458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.662 [2024-10-09 01:29:25.473505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:26.662 [2024-10-09 01:29:25.473575] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.662 [2024-10-09 01:29:25.475954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.662 [2024-10-09 01:29:25.476025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:26.662 pt1 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.662 malloc2 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.662 [2024-10-09 01:29:25.519812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:26.662 [2024-10-09 01:29:25.519952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.662 [2024-10-09 01:29:25.520005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:26.662 [2024-10-09 01:29:25.520052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.662 [2024-10-09 01:29:25.523220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.662 [2024-10-09 01:29:25.523290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:26.662 pt2 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:26.662 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:26.663 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:26.663 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:26.663 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:26.663 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:26.663 01:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.663 01:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.663 malloc3 00:09:26.663 01:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.663 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:26.663 01:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.663 01:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.922 [2024-10-09 01:29:25.554594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:26.922 [2024-10-09 01:29:25.554699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.922 [2024-10-09 01:29:25.554737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:26.922 [2024-10-09 01:29:25.554766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:09:26.922 [2024-10-09 01:29:25.557145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.922 [2024-10-09 01:29:25.557215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:26.922 pt3 00:09:26.922 01:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.922 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:26.923 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:26.923 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:26.923 01:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.923 01:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.923 [2024-10-09 01:29:25.566668] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:26.923 [2024-10-09 01:29:25.568777] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:26.923 [2024-10-09 01:29:25.568885] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:26.923 [2024-10-09 01:29:25.569051] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:26.923 [2024-10-09 01:29:25.569100] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:26.923 [2024-10-09 01:29:25.569389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:26.923 [2024-10-09 01:29:25.569587] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:26.923 [2024-10-09 01:29:25.569642] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:26.923 [2024-10-09 01:29:25.569818] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.923 01:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.923 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:26.923 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:26.923 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.923 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.923 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.923 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.923 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.923 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.923 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.923 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.923 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.923 01:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.923 01:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.923 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:26.923 01:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.923 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.923 "name": "raid_bdev1", 00:09:26.923 "uuid": 
"efd7b49e-c255-499a-9c2d-c4a9ad08ac03", 00:09:26.923 "strip_size_kb": 0, 00:09:26.923 "state": "online", 00:09:26.923 "raid_level": "raid1", 00:09:26.923 "superblock": true, 00:09:26.923 "num_base_bdevs": 3, 00:09:26.923 "num_base_bdevs_discovered": 3, 00:09:26.923 "num_base_bdevs_operational": 3, 00:09:26.923 "base_bdevs_list": [ 00:09:26.923 { 00:09:26.923 "name": "pt1", 00:09:26.923 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:26.923 "is_configured": true, 00:09:26.923 "data_offset": 2048, 00:09:26.923 "data_size": 63488 00:09:26.923 }, 00:09:26.923 { 00:09:26.923 "name": "pt2", 00:09:26.923 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:26.923 "is_configured": true, 00:09:26.923 "data_offset": 2048, 00:09:26.923 "data_size": 63488 00:09:26.923 }, 00:09:26.923 { 00:09:26.923 "name": "pt3", 00:09:26.923 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:26.923 "is_configured": true, 00:09:26.923 "data_offset": 2048, 00:09:26.923 "data_size": 63488 00:09:26.923 } 00:09:26.923 ] 00:09:26.923 }' 00:09:26.923 01:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.923 01:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.182 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:27.182 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:27.182 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:27.182 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:27.182 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:27.182 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:27.182 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:09:27.182 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:27.182 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.182 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.182 [2024-10-09 01:29:26.031082] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:27.182 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.182 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:27.182 "name": "raid_bdev1", 00:09:27.182 "aliases": [ 00:09:27.182 "efd7b49e-c255-499a-9c2d-c4a9ad08ac03" 00:09:27.182 ], 00:09:27.182 "product_name": "Raid Volume", 00:09:27.182 "block_size": 512, 00:09:27.182 "num_blocks": 63488, 00:09:27.182 "uuid": "efd7b49e-c255-499a-9c2d-c4a9ad08ac03", 00:09:27.182 "assigned_rate_limits": { 00:09:27.182 "rw_ios_per_sec": 0, 00:09:27.182 "rw_mbytes_per_sec": 0, 00:09:27.182 "r_mbytes_per_sec": 0, 00:09:27.182 "w_mbytes_per_sec": 0 00:09:27.182 }, 00:09:27.182 "claimed": false, 00:09:27.182 "zoned": false, 00:09:27.182 "supported_io_types": { 00:09:27.182 "read": true, 00:09:27.182 "write": true, 00:09:27.182 "unmap": false, 00:09:27.182 "flush": false, 00:09:27.182 "reset": true, 00:09:27.182 "nvme_admin": false, 00:09:27.182 "nvme_io": false, 00:09:27.182 "nvme_io_md": false, 00:09:27.182 "write_zeroes": true, 00:09:27.182 "zcopy": false, 00:09:27.182 "get_zone_info": false, 00:09:27.182 "zone_management": false, 00:09:27.182 "zone_append": false, 00:09:27.182 "compare": false, 00:09:27.182 "compare_and_write": false, 00:09:27.182 "abort": false, 00:09:27.182 "seek_hole": false, 00:09:27.182 "seek_data": false, 00:09:27.182 "copy": false, 00:09:27.182 "nvme_iov_md": false 00:09:27.182 }, 00:09:27.182 "memory_domains": [ 00:09:27.182 { 00:09:27.182 "dma_device_id": "system", 00:09:27.182 
"dma_device_type": 1 00:09:27.182 }, 00:09:27.182 { 00:09:27.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.182 "dma_device_type": 2 00:09:27.182 }, 00:09:27.182 { 00:09:27.182 "dma_device_id": "system", 00:09:27.182 "dma_device_type": 1 00:09:27.182 }, 00:09:27.182 { 00:09:27.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.182 "dma_device_type": 2 00:09:27.182 }, 00:09:27.182 { 00:09:27.182 "dma_device_id": "system", 00:09:27.182 "dma_device_type": 1 00:09:27.182 }, 00:09:27.182 { 00:09:27.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.182 "dma_device_type": 2 00:09:27.182 } 00:09:27.182 ], 00:09:27.182 "driver_specific": { 00:09:27.182 "raid": { 00:09:27.182 "uuid": "efd7b49e-c255-499a-9c2d-c4a9ad08ac03", 00:09:27.182 "strip_size_kb": 0, 00:09:27.182 "state": "online", 00:09:27.182 "raid_level": "raid1", 00:09:27.182 "superblock": true, 00:09:27.182 "num_base_bdevs": 3, 00:09:27.182 "num_base_bdevs_discovered": 3, 00:09:27.182 "num_base_bdevs_operational": 3, 00:09:27.182 "base_bdevs_list": [ 00:09:27.182 { 00:09:27.182 "name": "pt1", 00:09:27.182 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:27.182 "is_configured": true, 00:09:27.182 "data_offset": 2048, 00:09:27.182 "data_size": 63488 00:09:27.182 }, 00:09:27.182 { 00:09:27.182 "name": "pt2", 00:09:27.182 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:27.182 "is_configured": true, 00:09:27.182 "data_offset": 2048, 00:09:27.183 "data_size": 63488 00:09:27.183 }, 00:09:27.183 { 00:09:27.183 "name": "pt3", 00:09:27.183 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:27.183 "is_configured": true, 00:09:27.183 "data_offset": 2048, 00:09:27.183 "data_size": 63488 00:09:27.183 } 00:09:27.183 ] 00:09:27.183 } 00:09:27.183 } 00:09:27.183 }' 00:09:27.183 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:27.442 pt2 00:09:27.442 pt3' 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.442 [2024-10-09 01:29:26.307020] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:27.442 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.702 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=efd7b49e-c255-499a-9c2d-c4a9ad08ac03 00:09:27.702 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z efd7b49e-c255-499a-9c2d-c4a9ad08ac03 ']' 00:09:27.702 01:29:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:27.702 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.702 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.702 [2024-10-09 01:29:26.354798] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:27.702 [2024-10-09 01:29:26.354862] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:27.702 [2024-10-09 01:29:26.354951] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:27.702 [2024-10-09 01:29:26.355065] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:27.702 [2024-10-09 01:29:26.355112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:27.702 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.702 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.703 01:29:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.703 [2024-10-09 01:29:26.506868] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:27.703 [2024-10-09 01:29:26.508955] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:27.703 [2024-10-09 01:29:26.509039] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:27.703 [2024-10-09 01:29:26.509108] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:27.703 [2024-10-09 01:29:26.509219] 
bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:27.703 [2024-10-09 01:29:26.509285] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:27.703 [2024-10-09 01:29:26.509328] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:27.703 [2024-10-09 01:29:26.509388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:27.703 request: 00:09:27.703 { 00:09:27.703 "name": "raid_bdev1", 00:09:27.703 "raid_level": "raid1", 00:09:27.703 "base_bdevs": [ 00:09:27.703 "malloc1", 00:09:27.703 "malloc2", 00:09:27.703 "malloc3" 00:09:27.703 ], 00:09:27.703 "superblock": false, 00:09:27.703 "method": "bdev_raid_create", 00:09:27.703 "req_id": 1 00:09:27.703 } 00:09:27.703 Got JSON-RPC error response 00:09:27.703 response: 00:09:27.703 { 00:09:27.703 "code": -17, 00:09:27.703 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:27.703 } 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.703 [2024-10-09 01:29:26.570854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:27.703 [2024-10-09 01:29:26.570940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.703 [2024-10-09 01:29:26.570978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:27.703 [2024-10-09 01:29:26.571006] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.703 [2024-10-09 01:29:26.573408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.703 [2024-10-09 01:29:26.573476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:27.703 [2024-10-09 01:29:26.573570] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:27.703 [2024-10-09 01:29:26.573637] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:27.703 pt1 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.703 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.968 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.968 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.968 "name": "raid_bdev1", 00:09:27.968 "uuid": "efd7b49e-c255-499a-9c2d-c4a9ad08ac03", 00:09:27.968 "strip_size_kb": 0, 00:09:27.968 "state": "configuring", 00:09:27.968 "raid_level": "raid1", 00:09:27.968 "superblock": true, 00:09:27.968 "num_base_bdevs": 3, 00:09:27.968 "num_base_bdevs_discovered": 1, 00:09:27.968 "num_base_bdevs_operational": 3, 00:09:27.968 "base_bdevs_list": [ 00:09:27.968 { 00:09:27.968 "name": 
"pt1", 00:09:27.968 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:27.968 "is_configured": true, 00:09:27.968 "data_offset": 2048, 00:09:27.968 "data_size": 63488 00:09:27.968 }, 00:09:27.968 { 00:09:27.968 "name": null, 00:09:27.968 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:27.968 "is_configured": false, 00:09:27.968 "data_offset": 2048, 00:09:27.968 "data_size": 63488 00:09:27.968 }, 00:09:27.968 { 00:09:27.968 "name": null, 00:09:27.968 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:27.968 "is_configured": false, 00:09:27.968 "data_offset": 2048, 00:09:27.968 "data_size": 63488 00:09:27.968 } 00:09:27.968 ] 00:09:27.968 }' 00:09:27.968 01:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.968 01:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.246 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:28.246 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:28.246 01:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.246 01:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.246 [2024-10-09 01:29:27.062991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:28.246 [2024-10-09 01:29:27.063091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.246 [2024-10-09 01:29:27.063158] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:28.246 [2024-10-09 01:29:27.063190] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.246 [2024-10-09 01:29:27.063630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.246 [2024-10-09 01:29:27.063685] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:28.246 [2024-10-09 01:29:27.063776] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:28.246 [2024-10-09 01:29:27.063824] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:28.246 pt2 00:09:28.246 01:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.246 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:28.246 01:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.246 01:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.246 [2024-10-09 01:29:27.075022] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:28.246 01:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.246 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:28.246 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:28.246 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.246 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.246 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.246 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.246 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.246 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.246 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.246 01:29:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.246 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:28.246 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.246 01:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.246 01:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.246 01:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.246 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.246 "name": "raid_bdev1", 00:09:28.246 "uuid": "efd7b49e-c255-499a-9c2d-c4a9ad08ac03", 00:09:28.246 "strip_size_kb": 0, 00:09:28.246 "state": "configuring", 00:09:28.246 "raid_level": "raid1", 00:09:28.246 "superblock": true, 00:09:28.246 "num_base_bdevs": 3, 00:09:28.246 "num_base_bdevs_discovered": 1, 00:09:28.246 "num_base_bdevs_operational": 3, 00:09:28.246 "base_bdevs_list": [ 00:09:28.246 { 00:09:28.246 "name": "pt1", 00:09:28.246 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:28.246 "is_configured": true, 00:09:28.246 "data_offset": 2048, 00:09:28.246 "data_size": 63488 00:09:28.246 }, 00:09:28.246 { 00:09:28.246 "name": null, 00:09:28.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:28.246 "is_configured": false, 00:09:28.246 "data_offset": 0, 00:09:28.246 "data_size": 63488 00:09:28.246 }, 00:09:28.246 { 00:09:28.246 "name": null, 00:09:28.246 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:28.246 "is_configured": false, 00:09:28.246 "data_offset": 2048, 00:09:28.246 "data_size": 63488 00:09:28.246 } 00:09:28.246 ] 00:09:28.246 }' 00:09:28.246 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.246 01:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.832 [2024-10-09 01:29:27.507094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:28.832 [2024-10-09 01:29:27.507196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.832 [2024-10-09 01:29:27.507227] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:28.832 [2024-10-09 01:29:27.507270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.832 [2024-10-09 01:29:27.507666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.832 [2024-10-09 01:29:27.507721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:28.832 [2024-10-09 01:29:27.507807] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:28.832 [2024-10-09 01:29:27.507864] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:28.832 pt2 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 
-u 00000000-0000-0000-0000-000000000003 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.832 [2024-10-09 01:29:27.519102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:28.832 [2024-10-09 01:29:27.519185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.832 [2024-10-09 01:29:27.519228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:28.832 [2024-10-09 01:29:27.519256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.832 [2024-10-09 01:29:27.519612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.832 [2024-10-09 01:29:27.519666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:28.832 [2024-10-09 01:29:27.519738] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:28.832 [2024-10-09 01:29:27.519763] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:28.832 [2024-10-09 01:29:27.519854] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:28.832 [2024-10-09 01:29:27.519866] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:28.832 [2024-10-09 01:29:27.520095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:28.832 [2024-10-09 01:29:27.520218] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:28.832 [2024-10-09 01:29:27.520227] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:28.832 [2024-10-09 01:29:27.520327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:09:28.832 pt3 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.832 01:29:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.832 "name": "raid_bdev1", 00:09:28.832 "uuid": "efd7b49e-c255-499a-9c2d-c4a9ad08ac03", 00:09:28.832 "strip_size_kb": 0, 00:09:28.832 "state": "online", 00:09:28.832 "raid_level": "raid1", 00:09:28.832 "superblock": true, 00:09:28.832 "num_base_bdevs": 3, 00:09:28.832 "num_base_bdevs_discovered": 3, 00:09:28.832 "num_base_bdevs_operational": 3, 00:09:28.832 "base_bdevs_list": [ 00:09:28.832 { 00:09:28.832 "name": "pt1", 00:09:28.832 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:28.832 "is_configured": true, 00:09:28.832 "data_offset": 2048, 00:09:28.832 "data_size": 63488 00:09:28.832 }, 00:09:28.832 { 00:09:28.832 "name": "pt2", 00:09:28.832 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:28.832 "is_configured": true, 00:09:28.832 "data_offset": 2048, 00:09:28.832 "data_size": 63488 00:09:28.832 }, 00:09:28.832 { 00:09:28.832 "name": "pt3", 00:09:28.832 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:28.832 "is_configured": true, 00:09:28.832 "data_offset": 2048, 00:09:28.832 "data_size": 63488 00:09:28.832 } 00:09:28.832 ] 00:09:28.832 }' 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.832 01:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.091 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:29.091 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:29.091 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:29.091 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:29.091 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:29.091 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:09:29.091 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:29.091 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:29.091 01:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.091 01:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.091 [2024-10-09 01:29:27.919504] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:29.091 01:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.091 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:29.091 "name": "raid_bdev1", 00:09:29.091 "aliases": [ 00:09:29.091 "efd7b49e-c255-499a-9c2d-c4a9ad08ac03" 00:09:29.091 ], 00:09:29.091 "product_name": "Raid Volume", 00:09:29.091 "block_size": 512, 00:09:29.091 "num_blocks": 63488, 00:09:29.091 "uuid": "efd7b49e-c255-499a-9c2d-c4a9ad08ac03", 00:09:29.091 "assigned_rate_limits": { 00:09:29.091 "rw_ios_per_sec": 0, 00:09:29.091 "rw_mbytes_per_sec": 0, 00:09:29.091 "r_mbytes_per_sec": 0, 00:09:29.091 "w_mbytes_per_sec": 0 00:09:29.091 }, 00:09:29.091 "claimed": false, 00:09:29.091 "zoned": false, 00:09:29.091 "supported_io_types": { 00:09:29.091 "read": true, 00:09:29.091 "write": true, 00:09:29.091 "unmap": false, 00:09:29.091 "flush": false, 00:09:29.091 "reset": true, 00:09:29.091 "nvme_admin": false, 00:09:29.091 "nvme_io": false, 00:09:29.091 "nvme_io_md": false, 00:09:29.091 "write_zeroes": true, 00:09:29.091 "zcopy": false, 00:09:29.091 "get_zone_info": false, 00:09:29.091 "zone_management": false, 00:09:29.091 "zone_append": false, 00:09:29.091 "compare": false, 00:09:29.091 "compare_and_write": false, 00:09:29.091 "abort": false, 00:09:29.091 "seek_hole": false, 00:09:29.091 "seek_data": false, 00:09:29.091 "copy": false, 00:09:29.091 
"nvme_iov_md": false 00:09:29.091 }, 00:09:29.091 "memory_domains": [ 00:09:29.091 { 00:09:29.091 "dma_device_id": "system", 00:09:29.091 "dma_device_type": 1 00:09:29.091 }, 00:09:29.091 { 00:09:29.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.091 "dma_device_type": 2 00:09:29.091 }, 00:09:29.091 { 00:09:29.091 "dma_device_id": "system", 00:09:29.091 "dma_device_type": 1 00:09:29.091 }, 00:09:29.091 { 00:09:29.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.091 "dma_device_type": 2 00:09:29.091 }, 00:09:29.091 { 00:09:29.091 "dma_device_id": "system", 00:09:29.091 "dma_device_type": 1 00:09:29.091 }, 00:09:29.091 { 00:09:29.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.091 "dma_device_type": 2 00:09:29.091 } 00:09:29.091 ], 00:09:29.091 "driver_specific": { 00:09:29.091 "raid": { 00:09:29.091 "uuid": "efd7b49e-c255-499a-9c2d-c4a9ad08ac03", 00:09:29.091 "strip_size_kb": 0, 00:09:29.091 "state": "online", 00:09:29.091 "raid_level": "raid1", 00:09:29.091 "superblock": true, 00:09:29.091 "num_base_bdevs": 3, 00:09:29.091 "num_base_bdevs_discovered": 3, 00:09:29.091 "num_base_bdevs_operational": 3, 00:09:29.091 "base_bdevs_list": [ 00:09:29.091 { 00:09:29.091 "name": "pt1", 00:09:29.091 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:29.091 "is_configured": true, 00:09:29.091 "data_offset": 2048, 00:09:29.091 "data_size": 63488 00:09:29.091 }, 00:09:29.091 { 00:09:29.091 "name": "pt2", 00:09:29.091 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:29.091 "is_configured": true, 00:09:29.091 "data_offset": 2048, 00:09:29.091 "data_size": 63488 00:09:29.091 }, 00:09:29.091 { 00:09:29.091 "name": "pt3", 00:09:29.091 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:29.091 "is_configured": true, 00:09:29.091 "data_offset": 2048, 00:09:29.091 "data_size": 63488 00:09:29.091 } 00:09:29.091 ] 00:09:29.091 } 00:09:29.091 } 00:09:29.091 }' 00:09:29.091 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:29.351 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:29.351 pt2 00:09:29.351 pt3' 00:09:29.351 01:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.351 01:29:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.351 [2024-10-09 01:29:28.203520] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:29.351 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.611 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' efd7b49e-c255-499a-9c2d-c4a9ad08ac03 '!=' 
efd7b49e-c255-499a-9c2d-c4a9ad08ac03 ']' 00:09:29.611 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:29.611 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:29.611 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:29.611 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:29.611 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.611 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.611 [2024-10-09 01:29:28.251329] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:29.611 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.611 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:29.611 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:29.611 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.611 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.611 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.611 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:29.611 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.611 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.612 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.612 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.612 01:29:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.612 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.612 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.612 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.612 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.612 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.612 "name": "raid_bdev1", 00:09:29.612 "uuid": "efd7b49e-c255-499a-9c2d-c4a9ad08ac03", 00:09:29.612 "strip_size_kb": 0, 00:09:29.612 "state": "online", 00:09:29.612 "raid_level": "raid1", 00:09:29.612 "superblock": true, 00:09:29.612 "num_base_bdevs": 3, 00:09:29.612 "num_base_bdevs_discovered": 2, 00:09:29.612 "num_base_bdevs_operational": 2, 00:09:29.612 "base_bdevs_list": [ 00:09:29.612 { 00:09:29.612 "name": null, 00:09:29.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.612 "is_configured": false, 00:09:29.612 "data_offset": 0, 00:09:29.612 "data_size": 63488 00:09:29.612 }, 00:09:29.612 { 00:09:29.612 "name": "pt2", 00:09:29.612 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:29.612 "is_configured": true, 00:09:29.612 "data_offset": 2048, 00:09:29.612 "data_size": 63488 00:09:29.612 }, 00:09:29.612 { 00:09:29.612 "name": "pt3", 00:09:29.612 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:29.612 "is_configured": true, 00:09:29.612 "data_offset": 2048, 00:09:29.612 "data_size": 63488 00:09:29.612 } 00:09:29.612 ] 00:09:29.612 }' 00:09:29.612 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.612 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.871 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:09:29.871 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.871 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.871 [2024-10-09 01:29:28.707409] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:29.871 [2024-10-09 01:29:28.707434] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.871 [2024-10-09 01:29:28.707489] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.871 [2024-10-09 01:29:28.707550] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:29.871 [2024-10-09 01:29:28.707564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:29.871 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.871 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.871 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:29.871 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.871 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.871 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.871 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:29.871 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:29.871 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:29.871 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:29.871 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 
-- # rpc_cmd bdev_passthru_delete pt2 00:09:29.871 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.871 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.131 [2024-10-09 01:29:28.783417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:30.131 [2024-10-09 01:29:28.783530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.131 
[2024-10-09 01:29:28.783564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:30.131 [2024-10-09 01:29:28.783607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.131 [2024-10-09 01:29:28.786026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.131 [2024-10-09 01:29:28.786089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:30.131 [2024-10-09 01:29:28.786172] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:30.131 [2024-10-09 01:29:28.786223] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:30.131 pt2 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.131 01:29:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.131 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.131 "name": "raid_bdev1", 00:09:30.131 "uuid": "efd7b49e-c255-499a-9c2d-c4a9ad08ac03", 00:09:30.131 "strip_size_kb": 0, 00:09:30.131 "state": "configuring", 00:09:30.131 "raid_level": "raid1", 00:09:30.131 "superblock": true, 00:09:30.131 "num_base_bdevs": 3, 00:09:30.131 "num_base_bdevs_discovered": 1, 00:09:30.131 "num_base_bdevs_operational": 2, 00:09:30.131 "base_bdevs_list": [ 00:09:30.131 { 00:09:30.131 "name": null, 00:09:30.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.131 "is_configured": false, 00:09:30.131 "data_offset": 2048, 00:09:30.131 "data_size": 63488 00:09:30.131 }, 00:09:30.131 { 00:09:30.131 "name": "pt2", 00:09:30.131 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:30.131 "is_configured": true, 00:09:30.131 "data_offset": 2048, 00:09:30.131 "data_size": 63488 00:09:30.131 }, 00:09:30.131 { 00:09:30.131 "name": null, 00:09:30.131 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:30.131 "is_configured": false, 00:09:30.131 "data_offset": 2048, 00:09:30.132 "data_size": 63488 00:09:30.132 } 00:09:30.132 ] 00:09:30.132 }' 00:09:30.132 01:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.132 01:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.391 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( 
i++ )) 00:09:30.391 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:30.391 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:30.391 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:30.391 01:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.391 01:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.391 [2024-10-09 01:29:29.251579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:30.391 [2024-10-09 01:29:29.251682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.391 [2024-10-09 01:29:29.251720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:30.391 [2024-10-09 01:29:29.251754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.391 [2024-10-09 01:29:29.252148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.391 [2024-10-09 01:29:29.252203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:30.391 [2024-10-09 01:29:29.252288] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:30.391 [2024-10-09 01:29:29.252337] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:30.391 [2024-10-09 01:29:29.252449] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:30.391 [2024-10-09 01:29:29.252489] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:30.391 [2024-10-09 01:29:29.252777] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:30.391 [2024-10-09 01:29:29.252931] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:30.391 [2024-10-09 01:29:29.252967] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:30.391 [2024-10-09 01:29:29.253103] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.391 pt3 00:09:30.391 01:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.391 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:30.391 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.391 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.391 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.391 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.391 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:30.391 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.391 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.391 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.391 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.391 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.391 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.391 01:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.391 01:29:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:30.651 01:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.651 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.651 "name": "raid_bdev1", 00:09:30.651 "uuid": "efd7b49e-c255-499a-9c2d-c4a9ad08ac03", 00:09:30.651 "strip_size_kb": 0, 00:09:30.651 "state": "online", 00:09:30.651 "raid_level": "raid1", 00:09:30.651 "superblock": true, 00:09:30.651 "num_base_bdevs": 3, 00:09:30.651 "num_base_bdevs_discovered": 2, 00:09:30.651 "num_base_bdevs_operational": 2, 00:09:30.651 "base_bdevs_list": [ 00:09:30.651 { 00:09:30.651 "name": null, 00:09:30.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.651 "is_configured": false, 00:09:30.651 "data_offset": 2048, 00:09:30.651 "data_size": 63488 00:09:30.651 }, 00:09:30.651 { 00:09:30.651 "name": "pt2", 00:09:30.651 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:30.651 "is_configured": true, 00:09:30.651 "data_offset": 2048, 00:09:30.651 "data_size": 63488 00:09:30.651 }, 00:09:30.651 { 00:09:30.651 "name": "pt3", 00:09:30.651 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:30.651 "is_configured": true, 00:09:30.651 "data_offset": 2048, 00:09:30.651 "data_size": 63488 00:09:30.651 } 00:09:30.651 ] 00:09:30.651 }' 00:09:30.651 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.651 01:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.911 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:30.911 01:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.911 01:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.911 [2024-10-09 01:29:29.703654] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:30.911 [2024-10-09 
01:29:29.703722] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.911 [2024-10-09 01:29:29.703794] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.911 [2024-10-09 01:29:29.703861] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:30.911 [2024-10-09 01:29:29.703907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:30.911 01:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.911 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.911 01:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.911 01:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.911 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:30.911 01:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.911 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:30.911 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:30.911 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:30.911 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:30.911 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:30.911 01:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.911 01:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.911 01:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.911 01:29:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:30.912 01:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.912 01:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.912 [2024-10-09 01:29:29.779697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:30.912 [2024-10-09 01:29:29.779781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.912 [2024-10-09 01:29:29.779812] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:30.912 [2024-10-09 01:29:29.779838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.912 [2024-10-09 01:29:29.782203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.912 [2024-10-09 01:29:29.782266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:30.912 [2024-10-09 01:29:29.782344] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:30.912 [2024-10-09 01:29:29.782391] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:30.912 [2024-10-09 01:29:29.782510] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:30.912 [2024-10-09 01:29:29.782573] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:30.912 [2024-10-09 01:29:29.782627] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:09:30.912 [2024-10-09 01:29:29.782734] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:30.912 pt1 00:09:30.912 01:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:09:30.912 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:30.912 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:30.912 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.912 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.912 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.912 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.912 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:30.912 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.912 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.912 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.912 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.912 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.912 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.912 01:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.912 01:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.171 01:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.171 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.171 "name": "raid_bdev1", 00:09:31.171 "uuid": "efd7b49e-c255-499a-9c2d-c4a9ad08ac03", 00:09:31.171 "strip_size_kb": 
0, 00:09:31.171 "state": "configuring", 00:09:31.171 "raid_level": "raid1", 00:09:31.171 "superblock": true, 00:09:31.171 "num_base_bdevs": 3, 00:09:31.171 "num_base_bdevs_discovered": 1, 00:09:31.171 "num_base_bdevs_operational": 2, 00:09:31.171 "base_bdevs_list": [ 00:09:31.171 { 00:09:31.171 "name": null, 00:09:31.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.171 "is_configured": false, 00:09:31.171 "data_offset": 2048, 00:09:31.171 "data_size": 63488 00:09:31.171 }, 00:09:31.171 { 00:09:31.171 "name": "pt2", 00:09:31.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:31.171 "is_configured": true, 00:09:31.171 "data_offset": 2048, 00:09:31.171 "data_size": 63488 00:09:31.171 }, 00:09:31.171 { 00:09:31.171 "name": null, 00:09:31.171 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:31.171 "is_configured": false, 00:09:31.171 "data_offset": 2048, 00:09:31.171 "data_size": 63488 00:09:31.171 } 00:09:31.171 ] 00:09:31.171 }' 00:09:31.171 01:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.171 01:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.430 01:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:31.430 01:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:31.430 01:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.431 01:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.431 01:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.431 01:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:31.431 01:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 
00:09:31.431 01:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.431 01:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.431 [2024-10-09 01:29:30.243828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:31.431 [2024-10-09 01:29:30.243918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.431 [2024-10-09 01:29:30.243968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:31.431 [2024-10-09 01:29:30.243996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.431 [2024-10-09 01:29:30.244389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.431 [2024-10-09 01:29:30.244453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:31.431 [2024-10-09 01:29:30.244545] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:31.431 [2024-10-09 01:29:30.244618] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:31.431 [2024-10-09 01:29:30.244743] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:31.431 [2024-10-09 01:29:30.244778] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:31.431 [2024-10-09 01:29:30.245030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:09:31.431 [2024-10-09 01:29:30.245187] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:31.431 [2024-10-09 01:29:30.245230] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:31.431 [2024-10-09 01:29:30.245367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.431 pt3 00:09:31.431 01:29:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.431 01:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:31.431 01:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.431 01:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.431 01:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.431 01:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.431 01:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:31.431 01:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.431 01:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.431 01:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.431 01:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.431 01:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.431 01:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.431 01:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.431 01:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.431 01:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.431 01:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.431 "name": "raid_bdev1", 00:09:31.431 "uuid": "efd7b49e-c255-499a-9c2d-c4a9ad08ac03", 00:09:31.431 "strip_size_kb": 0, 00:09:31.431 "state": "online", 
00:09:31.431 "raid_level": "raid1", 00:09:31.431 "superblock": true, 00:09:31.431 "num_base_bdevs": 3, 00:09:31.431 "num_base_bdevs_discovered": 2, 00:09:31.431 "num_base_bdevs_operational": 2, 00:09:31.431 "base_bdevs_list": [ 00:09:31.431 { 00:09:31.431 "name": null, 00:09:31.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.431 "is_configured": false, 00:09:31.431 "data_offset": 2048, 00:09:31.431 "data_size": 63488 00:09:31.431 }, 00:09:31.431 { 00:09:31.431 "name": "pt2", 00:09:31.431 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:31.431 "is_configured": true, 00:09:31.431 "data_offset": 2048, 00:09:31.431 "data_size": 63488 00:09:31.431 }, 00:09:31.431 { 00:09:31.431 "name": "pt3", 00:09:31.431 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:31.431 "is_configured": true, 00:09:31.431 "data_offset": 2048, 00:09:31.431 "data_size": 63488 00:09:31.431 } 00:09:31.431 ] 00:09:31.431 }' 00:09:31.431 01:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.431 01:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.999 01:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:31.999 01:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:31.999 01:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.999 01:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.999 01:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.999 01:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:31.999 01:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:31.999 01:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | 
.uuid' 00:09:31.999 01:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.999 01:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.999 [2024-10-09 01:29:30.760202] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.999 01:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.999 01:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' efd7b49e-c255-499a-9c2d-c4a9ad08ac03 '!=' efd7b49e-c255-499a-9c2d-c4a9ad08ac03 ']' 00:09:31.999 01:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 80776 00:09:31.999 01:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 80776 ']' 00:09:31.999 01:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 80776 00:09:31.999 01:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:31.999 01:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:31.999 01:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80776 00:09:31.999 killing process with pid 80776 00:09:31.999 01:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:31.999 01:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:31.999 01:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80776' 00:09:31.999 01:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 80776 00:09:31.999 [2024-10-09 01:29:30.824539] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:31.999 [2024-10-09 01:29:30.824624] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.999 [2024-10-09 
01:29:30.824679] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:31.999 [2024-10-09 01:29:30.824690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:31.999 01:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 80776 00:09:31.999 [2024-10-09 01:29:30.884295] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:32.569 01:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:32.569 00:09:32.569 real 0m6.715s 00:09:32.569 user 0m11.049s 00:09:32.569 sys 0m1.456s 00:09:32.569 ************************************ 00:09:32.569 END TEST raid_superblock_test 00:09:32.569 ************************************ 00:09:32.569 01:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:32.569 01:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.570 01:29:31 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:32.570 01:29:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:32.570 01:29:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:32.570 01:29:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:32.570 ************************************ 00:09:32.570 START TEST raid_read_error_test 00:09:32.570 ************************************ 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:32.570 
01:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:32.570 01:29:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.o7q0YaG1dH 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81211 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81211 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 81211 ']' 00:09:32.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:32.570 01:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.570 [2024-10-09 01:29:31.435300] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:09:32.570 [2024-10-09 01:29:31.435496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81211 ] 00:09:32.830 [2024-10-09 01:29:31.571636] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:32.830 [2024-10-09 01:29:31.599967] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.830 [2024-10-09 01:29:31.667919] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.089 [2024-10-09 01:29:31.743214] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.089 [2024-10-09 01:29:31.743350] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.658 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:33.658 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:33.658 01:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:33.658 01:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.659 BaseBdev1_malloc 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.659 01:29:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.659 true 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.659 [2024-10-09 01:29:32.297615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:33.659 [2024-10-09 01:29:32.297747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.659 [2024-10-09 01:29:32.297784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:33.659 [2024-10-09 01:29:32.297826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.659 [2024-10-09 01:29:32.300233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.659 [2024-10-09 01:29:32.300306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:33.659 BaseBdev1 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.659 BaseBdev2_malloc 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.659 true 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.659 [2024-10-09 01:29:32.360263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:33.659 [2024-10-09 01:29:32.360402] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.659 [2024-10-09 01:29:32.360448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:33.659 [2024-10-09 01:29:32.360499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.659 [2024-10-09 01:29:32.363339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.659 [2024-10-09 01:29:32.363432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:33.659 BaseBdev2 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.659 BaseBdev3_malloc 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.659 true 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.659 [2024-10-09 01:29:32.406736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:33.659 [2024-10-09 01:29:32.406785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.659 [2024-10-09 01:29:32.406817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:33.659 [2024-10-09 01:29:32.406828] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.659 [2024-10-09 01:29:32.409213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.659 [2024-10-09 01:29:32.409252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:33.659 BaseBdev3 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.659 01:29:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.659 [2024-10-09 01:29:32.418820] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.659 [2024-10-09 01:29:32.420914] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:33.659 [2024-10-09 01:29:32.421035] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:33.659 [2024-10-09 01:29:32.421250] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:33.659 [2024-10-09 01:29:32.421308] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:33.659 [2024-10-09 01:29:32.421569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:33.659 [2024-10-09 01:29:32.421757] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:33.659 [2024-10-09 01:29:32.421804] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:33.659 [2024-10-09 01:29:32.421969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.659 01:29:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.659 01:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.659 "name": "raid_bdev1", 00:09:33.659 "uuid": "48ab1749-85b1-4423-be0e-2812108a481c", 00:09:33.659 "strip_size_kb": 0, 00:09:33.659 "state": "online", 00:09:33.659 "raid_level": "raid1", 00:09:33.659 "superblock": true, 00:09:33.659 "num_base_bdevs": 3, 00:09:33.659 "num_base_bdevs_discovered": 3, 00:09:33.659 "num_base_bdevs_operational": 3, 00:09:33.659 "base_bdevs_list": [ 00:09:33.659 { 00:09:33.659 "name": "BaseBdev1", 00:09:33.659 "uuid": "94c43786-3540-52a7-86cf-a2633654c00c", 00:09:33.659 "is_configured": true, 00:09:33.659 "data_offset": 2048, 00:09:33.659 "data_size": 63488 00:09:33.659 }, 00:09:33.659 
{ 00:09:33.659 "name": "BaseBdev2", 00:09:33.659 "uuid": "b54aebbc-82a3-5d74-8176-c1f3af310160", 00:09:33.659 "is_configured": true, 00:09:33.659 "data_offset": 2048, 00:09:33.659 "data_size": 63488 00:09:33.659 }, 00:09:33.659 { 00:09:33.659 "name": "BaseBdev3", 00:09:33.659 "uuid": "f1803084-92f0-5471-9ee7-b7adc7e6bf55", 00:09:33.659 "is_configured": true, 00:09:33.659 "data_offset": 2048, 00:09:33.659 "data_size": 63488 00:09:33.660 } 00:09:33.660 ] 00:09:33.660 }' 00:09:33.660 01:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.660 01:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.229 01:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:34.229 01:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:34.229 [2024-10-09 01:29:32.931360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:35.169 01:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:35.169 01:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.169 01:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.169 01:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.169 01:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:35.169 01:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:35.169 01:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:35.169 01:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:35.169 01:29:33 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:35.169 01:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:35.169 01:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.169 01:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.169 01:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.169 01:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.169 01:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.169 01:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.169 01:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.169 01:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.169 01:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.169 01:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:35.169 01:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.169 01:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.169 01:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.169 01:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.169 "name": "raid_bdev1", 00:09:35.169 "uuid": "48ab1749-85b1-4423-be0e-2812108a481c", 00:09:35.169 "strip_size_kb": 0, 00:09:35.169 "state": "online", 00:09:35.169 "raid_level": "raid1", 00:09:35.169 "superblock": true, 00:09:35.169 "num_base_bdevs": 3, 00:09:35.169 
"num_base_bdevs_discovered": 3, 00:09:35.169 "num_base_bdevs_operational": 3, 00:09:35.169 "base_bdevs_list": [ 00:09:35.169 { 00:09:35.169 "name": "BaseBdev1", 00:09:35.169 "uuid": "94c43786-3540-52a7-86cf-a2633654c00c", 00:09:35.169 "is_configured": true, 00:09:35.169 "data_offset": 2048, 00:09:35.169 "data_size": 63488 00:09:35.169 }, 00:09:35.169 { 00:09:35.169 "name": "BaseBdev2", 00:09:35.169 "uuid": "b54aebbc-82a3-5d74-8176-c1f3af310160", 00:09:35.169 "is_configured": true, 00:09:35.169 "data_offset": 2048, 00:09:35.169 "data_size": 63488 00:09:35.169 }, 00:09:35.169 { 00:09:35.169 "name": "BaseBdev3", 00:09:35.169 "uuid": "f1803084-92f0-5471-9ee7-b7adc7e6bf55", 00:09:35.169 "is_configured": true, 00:09:35.169 "data_offset": 2048, 00:09:35.169 "data_size": 63488 00:09:35.169 } 00:09:35.169 ] 00:09:35.169 }' 00:09:35.169 01:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.169 01:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.429 01:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:35.429 01:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.429 01:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.429 [2024-10-09 01:29:34.282611] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:35.429 [2024-10-09 01:29:34.282693] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.429 [2024-10-09 01:29:34.285214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.429 [2024-10-09 01:29:34.285310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.429 [2024-10-09 01:29:34.285445] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.429 
[2024-10-09 01:29:34.285493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:35.429 { 00:09:35.429 "results": [ 00:09:35.429 { 00:09:35.429 "job": "raid_bdev1", 00:09:35.429 "core_mask": "0x1", 00:09:35.429 "workload": "randrw", 00:09:35.429 "percentage": 50, 00:09:35.429 "status": "finished", 00:09:35.429 "queue_depth": 1, 00:09:35.429 "io_size": 131072, 00:09:35.429 "runtime": 1.349262, 00:09:35.429 "iops": 11475.903123337053, 00:09:35.429 "mibps": 1434.4878904171317, 00:09:35.429 "io_failed": 0, 00:09:35.429 "io_timeout": 0, 00:09:35.429 "avg_latency_us": 84.73581555680593, 00:09:35.429 "min_latency_us": 21.97855835439728, 00:09:35.429 "max_latency_us": 1428.0484616055087 00:09:35.429 } 00:09:35.429 ], 00:09:35.429 "core_count": 1 00:09:35.429 } 00:09:35.429 01:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.429 01:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81211 00:09:35.429 01:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 81211 ']' 00:09:35.429 01:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 81211 00:09:35.429 01:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:35.429 01:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:35.429 01:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81211 00:09:35.689 killing process with pid 81211 00:09:35.689 01:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:35.689 01:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:35.689 01:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81211' 00:09:35.689 
01:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 81211 00:09:35.689 01:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 81211 00:09:35.689 [2024-10-09 01:29:34.329330] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:35.689 [2024-10-09 01:29:34.377149] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:35.950 01:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.o7q0YaG1dH 00:09:35.950 01:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:35.950 01:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:35.950 ************************************ 00:09:35.950 END TEST raid_read_error_test 00:09:35.950 ************************************ 00:09:35.950 01:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:35.950 01:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:35.950 01:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:35.950 01:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:35.950 01:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:35.950 00:09:35.950 real 0m3.428s 00:09:35.950 user 0m4.155s 00:09:35.950 sys 0m0.632s 00:09:35.950 01:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:35.950 01:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.950 01:29:34 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:35.950 01:29:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:35.950 01:29:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:35.950 01:29:34 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:09:35.950 ************************************ 00:09:35.950 START TEST raid_write_error_test 00:09:35.950 ************************************ 00:09:35.950 01:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:09:35.950 01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:35.950 01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:35.950 01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:35.950 01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:35.950 01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:35.950 01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:35.950 01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:35.950 01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:35.950 01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:35.950 01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:35.950 01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:35.950 01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:35.950 01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:35.950 01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:35.950 01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:35.950 01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:35.950 
01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:35.950 01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:35.950 01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:35.950 01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:35.950 01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:35.950 01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:35.950 01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:36.210 01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:36.210 01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.uFHhrMJQey 00:09:36.210 01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81340 00:09:36.210 01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:36.210 01:29:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81340 00:09:36.210 01:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 81340 ']' 00:09:36.210 01:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.210 01:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:36.210 01:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:36.210 01:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:36.210 01:29:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.210 [2024-10-09 01:29:34.929866] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:09:36.210 [2024-10-09 01:29:34.930063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81340 ] 00:09:36.210 [2024-10-09 01:29:35.062076] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:36.210 [2024-10-09 01:29:35.090666] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.470 [2024-10-09 01:29:35.159190] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.470 [2024-10-09 01:29:35.235194] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.470 [2024-10-09 01:29:35.235311] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.039 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:37.039 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:37.039 01:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:37.039 01:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:37.039 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.039 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.039 BaseBdev1_malloc 00:09:37.039 01:29:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.039 01:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:37.039 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.039 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.039 true 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.040 [2024-10-09 01:29:35.786149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:37.040 [2024-10-09 01:29:35.786255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.040 [2024-10-09 01:29:35.786293] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:37.040 [2024-10-09 01:29:35.786327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.040 [2024-10-09 01:29:35.788782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.040 [2024-10-09 01:29:35.788851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:37.040 BaseBdev1 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.040 BaseBdev2_malloc 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.040 true 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.040 [2024-10-09 01:29:35.850030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:37.040 [2024-10-09 01:29:35.850142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.040 [2024-10-09 01:29:35.850186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:37.040 [2024-10-09 01:29:35.850230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.040 [2024-10-09 01:29:35.853124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.040 [2024-10-09 01:29:35.853172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:37.040 BaseBdev2 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.040 BaseBdev3_malloc 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.040 true 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.040 [2024-10-09 01:29:35.896534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:37.040 [2024-10-09 01:29:35.896619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.040 [2024-10-09 01:29:35.896653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:37.040 [2024-10-09 01:29:35.896682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.040 [2024-10-09 01:29:35.898995] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.040 [2024-10-09 01:29:35.899061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:37.040 BaseBdev3 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.040 [2024-10-09 01:29:35.908614] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:37.040 [2024-10-09 01:29:35.910696] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:37.040 [2024-10-09 01:29:35.910795] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:37.040 [2024-10-09 01:29:35.911013] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:37.040 [2024-10-09 01:29:35.911056] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:37.040 [2024-10-09 01:29:35.911309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:37.040 [2024-10-09 01:29:35.911490] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:37.040 [2024-10-09 01:29:35.911544] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:37.040 [2024-10-09 01:29:35.911709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.040 01:29:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.040 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.299 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.299 01:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.299 "name": "raid_bdev1", 00:09:37.299 "uuid": "31f4080e-5b5b-4e53-b383-0909d9e72919", 00:09:37.299 "strip_size_kb": 0, 00:09:37.299 "state": "online", 00:09:37.299 "raid_level": "raid1", 00:09:37.299 "superblock": true, 00:09:37.299 
"num_base_bdevs": 3, 00:09:37.299 "num_base_bdevs_discovered": 3, 00:09:37.299 "num_base_bdevs_operational": 3, 00:09:37.299 "base_bdevs_list": [ 00:09:37.299 { 00:09:37.299 "name": "BaseBdev1", 00:09:37.299 "uuid": "65891f53-9a80-5ed6-a5d4-922d7a679cb8", 00:09:37.299 "is_configured": true, 00:09:37.299 "data_offset": 2048, 00:09:37.299 "data_size": 63488 00:09:37.299 }, 00:09:37.299 { 00:09:37.299 "name": "BaseBdev2", 00:09:37.299 "uuid": "0bfa7717-df5c-55d5-b238-7df2ad0abf85", 00:09:37.299 "is_configured": true, 00:09:37.299 "data_offset": 2048, 00:09:37.299 "data_size": 63488 00:09:37.299 }, 00:09:37.299 { 00:09:37.299 "name": "BaseBdev3", 00:09:37.299 "uuid": "5555908a-63e2-542f-a798-a2ad9bb4f435", 00:09:37.299 "is_configured": true, 00:09:37.299 "data_offset": 2048, 00:09:37.299 "data_size": 63488 00:09:37.299 } 00:09:37.299 ] 00:09:37.299 }' 00:09:37.299 01:29:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.299 01:29:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.558 01:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:37.558 01:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:37.558 [2024-10-09 01:29:36.397144] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:38.498 01:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:38.498 01:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.498 01:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.498 [2024-10-09 01:29:37.316723] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:38.498 [2024-10-09 01:29:37.316784] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:38.498 [2024-10-09 01:29:37.317031] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:09:38.498 01:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.498 01:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:38.498 01:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:38.498 01:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:38.498 01:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:38.498 01:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:38.498 01:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:38.498 01:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.498 01:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.498 01:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.498 01:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:38.498 01:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.498 01:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.498 01:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.498 01:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.498 01:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.498 01:29:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:38.498 01:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.499 01:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.499 01:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.499 01:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.499 "name": "raid_bdev1", 00:09:38.499 "uuid": "31f4080e-5b5b-4e53-b383-0909d9e72919", 00:09:38.499 "strip_size_kb": 0, 00:09:38.499 "state": "online", 00:09:38.499 "raid_level": "raid1", 00:09:38.499 "superblock": true, 00:09:38.499 "num_base_bdevs": 3, 00:09:38.499 "num_base_bdevs_discovered": 2, 00:09:38.499 "num_base_bdevs_operational": 2, 00:09:38.499 "base_bdevs_list": [ 00:09:38.499 { 00:09:38.499 "name": null, 00:09:38.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.499 "is_configured": false, 00:09:38.499 "data_offset": 0, 00:09:38.499 "data_size": 63488 00:09:38.499 }, 00:09:38.499 { 00:09:38.499 "name": "BaseBdev2", 00:09:38.499 "uuid": "0bfa7717-df5c-55d5-b238-7df2ad0abf85", 00:09:38.499 "is_configured": true, 00:09:38.499 "data_offset": 2048, 00:09:38.499 "data_size": 63488 00:09:38.499 }, 00:09:38.499 { 00:09:38.499 "name": "BaseBdev3", 00:09:38.499 "uuid": "5555908a-63e2-542f-a798-a2ad9bb4f435", 00:09:38.499 "is_configured": true, 00:09:38.499 "data_offset": 2048, 00:09:38.499 "data_size": 63488 00:09:38.499 } 00:09:38.499 ] 00:09:38.499 }' 00:09:38.499 01:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.499 01:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.124 01:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:39.124 01:29:37 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.124 01:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.124 [2024-10-09 01:29:37.789199] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:39.124 [2024-10-09 01:29:37.789297] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:39.124 [2024-10-09 01:29:37.791741] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:39.124 [2024-10-09 01:29:37.791832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.124 [2024-10-09 01:29:37.791934] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:39.124 [2024-10-09 01:29:37.791986] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:39.124 { 00:09:39.124 "results": [ 00:09:39.124 { 00:09:39.124 "job": "raid_bdev1", 00:09:39.124 "core_mask": "0x1", 00:09:39.124 "workload": "randrw", 00:09:39.124 "percentage": 50, 00:09:39.124 "status": "finished", 00:09:39.124 "queue_depth": 1, 00:09:39.124 "io_size": 131072, 00:09:39.124 "runtime": 1.389963, 00:09:39.124 "iops": 13104.665375984829, 00:09:39.124 "mibps": 1638.0831719981036, 00:09:39.124 "io_failed": 0, 00:09:39.124 "io_timeout": 0, 00:09:39.124 "avg_latency_us": 73.8837546693009, 00:09:39.124 "min_latency_us": 21.309160638019698, 00:09:39.124 "max_latency_us": 1392.3472500653709 00:09:39.124 } 00:09:39.124 ], 00:09:39.125 "core_count": 1 00:09:39.125 } 00:09:39.125 01:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.125 01:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81340 00:09:39.125 01:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 81340 ']' 00:09:39.125 01:29:37 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # kill -0 81340 00:09:39.125 01:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:39.125 01:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:39.125 01:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81340 00:09:39.125 01:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:39.125 01:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:39.125 01:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81340' 00:09:39.125 killing process with pid 81340 00:09:39.125 01:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 81340 00:09:39.125 [2024-10-09 01:29:37.839432] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:39.125 01:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 81340 00:09:39.125 [2024-10-09 01:29:37.885928] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:39.385 01:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.uFHhrMJQey 00:09:39.385 01:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:39.385 01:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:39.385 ************************************ 00:09:39.385 END TEST raid_write_error_test 00:09:39.385 ************************************ 00:09:39.385 01:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:39.385 01:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:39.385 01:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:39.385 01:29:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:39.385 01:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:39.385 00:09:39.385 real 0m3.435s 00:09:39.385 user 0m4.152s 00:09:39.385 sys 0m0.645s 00:09:39.385 01:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.385 01:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.645 01:29:38 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:39.645 01:29:38 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:39.645 01:29:38 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:39.645 01:29:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:39.645 01:29:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.645 01:29:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:39.645 ************************************ 00:09:39.645 START TEST raid_state_function_test 00:09:39.645 ************************************ 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:39.645 01:29:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=81467 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81467' 00:09:39.645 Process raid pid: 81467 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 81467 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 81467 ']' 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:39.645 01:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.646 01:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:39.646 01:29:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.646 [2024-10-09 01:29:38.434509] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:09:39.646 [2024-10-09 01:29:38.434712] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.905 [2024-10-09 01:29:38.567848] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:39.905 [2024-10-09 01:29:38.597068] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.905 [2024-10-09 01:29:38.665147] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.905 [2024-10-09 01:29:38.741209] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.905 [2024-10-09 01:29:38.741346] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:40.475 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:40.475 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:40.475 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:40.475 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.475 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.475 [2024-10-09 01:29:39.269267] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:40.475 [2024-10-09 01:29:39.269327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:40.475 [2024-10-09 01:29:39.269348] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:40.475 [2024-10-09 01:29:39.269357] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:40.475 [2024-10-09 01:29:39.269369] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:40.475 [2024-10-09 01:29:39.269377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:40.475 [2024-10-09 01:29:39.269385] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:40.475 [2024-10-09 01:29:39.269392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:40.475 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.475 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:40.475 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.475 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.475 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:40.475 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.475 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:40.475 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.475 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.475 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.475 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.475 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:09:40.475 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.475 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.475 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.475 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.475 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.475 "name": "Existed_Raid", 00:09:40.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.475 "strip_size_kb": 64, 00:09:40.475 "state": "configuring", 00:09:40.475 "raid_level": "raid0", 00:09:40.475 "superblock": false, 00:09:40.475 "num_base_bdevs": 4, 00:09:40.475 "num_base_bdevs_discovered": 0, 00:09:40.475 "num_base_bdevs_operational": 4, 00:09:40.475 "base_bdevs_list": [ 00:09:40.475 { 00:09:40.475 "name": "BaseBdev1", 00:09:40.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.475 "is_configured": false, 00:09:40.475 "data_offset": 0, 00:09:40.475 "data_size": 0 00:09:40.475 }, 00:09:40.475 { 00:09:40.475 "name": "BaseBdev2", 00:09:40.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.475 "is_configured": false, 00:09:40.475 "data_offset": 0, 00:09:40.475 "data_size": 0 00:09:40.475 }, 00:09:40.475 { 00:09:40.475 "name": "BaseBdev3", 00:09:40.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.475 "is_configured": false, 00:09:40.475 "data_offset": 0, 00:09:40.475 "data_size": 0 00:09:40.475 }, 00:09:40.475 { 00:09:40.475 "name": "BaseBdev4", 00:09:40.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.475 "is_configured": false, 00:09:40.475 "data_offset": 0, 00:09:40.475 "data_size": 0 00:09:40.475 } 00:09:40.475 ] 00:09:40.475 }' 00:09:40.475 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.475 01:29:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.044 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:41.044 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.044 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.044 [2024-10-09 01:29:39.665292] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:41.044 [2024-10-09 01:29:39.665369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:41.044 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.044 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:41.044 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.044 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.044 [2024-10-09 01:29:39.677296] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:41.044 [2024-10-09 01:29:39.677365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:41.044 [2024-10-09 01:29:39.677393] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:41.044 [2024-10-09 01:29:39.677413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:41.044 [2024-10-09 01:29:39.677432] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:41.044 [2024-10-09 01:29:39.677450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 
00:09:41.044 [2024-10-09 01:29:39.677469] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:41.044 [2024-10-09 01:29:39.677487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:41.044 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.044 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:41.044 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.044 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.044 [2024-10-09 01:29:39.704008] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:41.044 BaseBdev1 00:09:41.044 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.044 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:41.044 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:41.044 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:41.044 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:41.044 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:41.044 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:41.044 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:41.044 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.044 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.044 01:29:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.045 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:41.045 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.045 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.045 [ 00:09:41.045 { 00:09:41.045 "name": "BaseBdev1", 00:09:41.045 "aliases": [ 00:09:41.045 "74654cd6-e315-4f2d-97dc-001e71727dfa" 00:09:41.045 ], 00:09:41.045 "product_name": "Malloc disk", 00:09:41.045 "block_size": 512, 00:09:41.045 "num_blocks": 65536, 00:09:41.045 "uuid": "74654cd6-e315-4f2d-97dc-001e71727dfa", 00:09:41.045 "assigned_rate_limits": { 00:09:41.045 "rw_ios_per_sec": 0, 00:09:41.045 "rw_mbytes_per_sec": 0, 00:09:41.045 "r_mbytes_per_sec": 0, 00:09:41.045 "w_mbytes_per_sec": 0 00:09:41.045 }, 00:09:41.045 "claimed": true, 00:09:41.045 "claim_type": "exclusive_write", 00:09:41.045 "zoned": false, 00:09:41.045 "supported_io_types": { 00:09:41.045 "read": true, 00:09:41.045 "write": true, 00:09:41.045 "unmap": true, 00:09:41.045 "flush": true, 00:09:41.045 "reset": true, 00:09:41.045 "nvme_admin": false, 00:09:41.045 "nvme_io": false, 00:09:41.045 "nvme_io_md": false, 00:09:41.045 "write_zeroes": true, 00:09:41.045 "zcopy": true, 00:09:41.045 "get_zone_info": false, 00:09:41.045 "zone_management": false, 00:09:41.045 "zone_append": false, 00:09:41.045 "compare": false, 00:09:41.045 "compare_and_write": false, 00:09:41.045 "abort": true, 00:09:41.045 "seek_hole": false, 00:09:41.045 "seek_data": false, 00:09:41.045 "copy": true, 00:09:41.045 "nvme_iov_md": false 00:09:41.045 }, 00:09:41.045 "memory_domains": [ 00:09:41.045 { 00:09:41.045 "dma_device_id": "system", 00:09:41.045 "dma_device_type": 1 00:09:41.045 }, 00:09:41.045 { 00:09:41.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.045 "dma_device_type": 
2 00:09:41.045 } 00:09:41.045 ], 00:09:41.045 "driver_specific": {} 00:09:41.045 } 00:09:41.045 ] 00:09:41.045 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.045 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:41.045 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:41.045 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.045 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.045 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:41.045 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.045 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:41.045 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.045 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.045 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.045 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.045 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.045 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.045 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.045 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.045 01:29:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.045 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.045 "name": "Existed_Raid", 00:09:41.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.045 "strip_size_kb": 64, 00:09:41.045 "state": "configuring", 00:09:41.045 "raid_level": "raid0", 00:09:41.045 "superblock": false, 00:09:41.045 "num_base_bdevs": 4, 00:09:41.045 "num_base_bdevs_discovered": 1, 00:09:41.045 "num_base_bdevs_operational": 4, 00:09:41.045 "base_bdevs_list": [ 00:09:41.045 { 00:09:41.045 "name": "BaseBdev1", 00:09:41.045 "uuid": "74654cd6-e315-4f2d-97dc-001e71727dfa", 00:09:41.045 "is_configured": true, 00:09:41.045 "data_offset": 0, 00:09:41.045 "data_size": 65536 00:09:41.045 }, 00:09:41.045 { 00:09:41.045 "name": "BaseBdev2", 00:09:41.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.045 "is_configured": false, 00:09:41.045 "data_offset": 0, 00:09:41.045 "data_size": 0 00:09:41.045 }, 00:09:41.045 { 00:09:41.045 "name": "BaseBdev3", 00:09:41.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.045 "is_configured": false, 00:09:41.045 "data_offset": 0, 00:09:41.045 "data_size": 0 00:09:41.045 }, 00:09:41.045 { 00:09:41.045 "name": "BaseBdev4", 00:09:41.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.045 "is_configured": false, 00:09:41.045 "data_offset": 0, 00:09:41.045 "data_size": 0 00:09:41.045 } 00:09:41.045 ] 00:09:41.045 }' 00:09:41.045 01:29:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.045 01:29:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.304 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:41.564 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.564 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:09:41.564 [2024-10-09 01:29:40.200164] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:41.564 [2024-10-09 01:29:40.200255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:41.564 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.564 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:41.564 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.564 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.564 [2024-10-09 01:29:40.212181] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:41.564 [2024-10-09 01:29:40.214357] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:41.564 [2024-10-09 01:29:40.214429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:41.564 [2024-10-09 01:29:40.214444] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:41.564 [2024-10-09 01:29:40.214451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:41.564 [2024-10-09 01:29:40.214458] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:41.564 [2024-10-09 01:29:40.214464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:41.564 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.564 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:41.564 01:29:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:41.564 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:41.564 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.564 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.564 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:41.564 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.564 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:41.564 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.564 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.564 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.564 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.564 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.564 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.564 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.564 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.564 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.564 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.564 "name": "Existed_Raid", 00:09:41.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.564 
"strip_size_kb": 64, 00:09:41.564 "state": "configuring", 00:09:41.564 "raid_level": "raid0", 00:09:41.564 "superblock": false, 00:09:41.564 "num_base_bdevs": 4, 00:09:41.564 "num_base_bdevs_discovered": 1, 00:09:41.564 "num_base_bdevs_operational": 4, 00:09:41.564 "base_bdevs_list": [ 00:09:41.564 { 00:09:41.564 "name": "BaseBdev1", 00:09:41.564 "uuid": "74654cd6-e315-4f2d-97dc-001e71727dfa", 00:09:41.564 "is_configured": true, 00:09:41.564 "data_offset": 0, 00:09:41.564 "data_size": 65536 00:09:41.564 }, 00:09:41.564 { 00:09:41.564 "name": "BaseBdev2", 00:09:41.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.564 "is_configured": false, 00:09:41.564 "data_offset": 0, 00:09:41.564 "data_size": 0 00:09:41.564 }, 00:09:41.564 { 00:09:41.564 "name": "BaseBdev3", 00:09:41.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.564 "is_configured": false, 00:09:41.564 "data_offset": 0, 00:09:41.564 "data_size": 0 00:09:41.564 }, 00:09:41.564 { 00:09:41.564 "name": "BaseBdev4", 00:09:41.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.564 "is_configured": false, 00:09:41.564 "data_offset": 0, 00:09:41.564 "data_size": 0 00:09:41.564 } 00:09:41.564 ] 00:09:41.564 }' 00:09:41.564 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.564 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.824 [2024-10-09 01:29:40.670561] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:41.824 BaseBdev2 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.824 [ 00:09:41.824 { 00:09:41.824 "name": "BaseBdev2", 00:09:41.824 "aliases": [ 00:09:41.824 "04df8e45-982a-48b5-b1a8-5d37363db0b0" 00:09:41.824 ], 00:09:41.824 "product_name": "Malloc disk", 00:09:41.824 "block_size": 512, 00:09:41.824 "num_blocks": 65536, 00:09:41.824 "uuid": "04df8e45-982a-48b5-b1a8-5d37363db0b0", 00:09:41.824 "assigned_rate_limits": { 00:09:41.824 "rw_ios_per_sec": 0, 00:09:41.824 "rw_mbytes_per_sec": 0, 00:09:41.824 "r_mbytes_per_sec": 0, 00:09:41.824 "w_mbytes_per_sec": 0 00:09:41.824 
}, 00:09:41.824 "claimed": true, 00:09:41.824 "claim_type": "exclusive_write", 00:09:41.824 "zoned": false, 00:09:41.824 "supported_io_types": { 00:09:41.824 "read": true, 00:09:41.824 "write": true, 00:09:41.824 "unmap": true, 00:09:41.824 "flush": true, 00:09:41.824 "reset": true, 00:09:41.824 "nvme_admin": false, 00:09:41.824 "nvme_io": false, 00:09:41.824 "nvme_io_md": false, 00:09:41.824 "write_zeroes": true, 00:09:41.824 "zcopy": true, 00:09:41.824 "get_zone_info": false, 00:09:41.824 "zone_management": false, 00:09:41.824 "zone_append": false, 00:09:41.824 "compare": false, 00:09:41.824 "compare_and_write": false, 00:09:41.824 "abort": true, 00:09:41.824 "seek_hole": false, 00:09:41.824 "seek_data": false, 00:09:41.824 "copy": true, 00:09:41.824 "nvme_iov_md": false 00:09:41.824 }, 00:09:41.824 "memory_domains": [ 00:09:41.824 { 00:09:41.824 "dma_device_id": "system", 00:09:41.824 "dma_device_type": 1 00:09:41.824 }, 00:09:41.824 { 00:09:41.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.824 "dma_device_type": 2 00:09:41.824 } 00:09:41.824 ], 00:09:41.824 "driver_specific": {} 00:09:41.824 } 00:09:41.824 ] 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.824 01:29:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.824 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.083 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.083 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.083 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.083 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.083 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.083 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.083 "name": "Existed_Raid", 00:09:42.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.083 "strip_size_kb": 64, 00:09:42.083 "state": "configuring", 00:09:42.083 "raid_level": "raid0", 00:09:42.083 "superblock": false, 00:09:42.083 "num_base_bdevs": 4, 00:09:42.083 "num_base_bdevs_discovered": 2, 00:09:42.083 "num_base_bdevs_operational": 4, 00:09:42.083 "base_bdevs_list": [ 00:09:42.083 { 00:09:42.083 "name": "BaseBdev1", 00:09:42.083 "uuid": "74654cd6-e315-4f2d-97dc-001e71727dfa", 00:09:42.083 "is_configured": true, 00:09:42.083 "data_offset": 0, 
00:09:42.083 "data_size": 65536 00:09:42.083 }, 00:09:42.083 { 00:09:42.083 "name": "BaseBdev2", 00:09:42.083 "uuid": "04df8e45-982a-48b5-b1a8-5d37363db0b0", 00:09:42.083 "is_configured": true, 00:09:42.083 "data_offset": 0, 00:09:42.083 "data_size": 65536 00:09:42.083 }, 00:09:42.083 { 00:09:42.083 "name": "BaseBdev3", 00:09:42.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.083 "is_configured": false, 00:09:42.083 "data_offset": 0, 00:09:42.083 "data_size": 0 00:09:42.083 }, 00:09:42.083 { 00:09:42.083 "name": "BaseBdev4", 00:09:42.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.083 "is_configured": false, 00:09:42.083 "data_offset": 0, 00:09:42.083 "data_size": 0 00:09:42.083 } 00:09:42.083 ] 00:09:42.083 }' 00:09:42.083 01:29:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.083 01:29:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.343 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:42.343 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.343 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.343 [2024-10-09 01:29:41.147340] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:42.343 BaseBdev3 00:09:42.343 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.343 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:42.343 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:42.343 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:42.343 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 
00:09:42.343 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:42.343 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.344 [ 00:09:42.344 { 00:09:42.344 "name": "BaseBdev3", 00:09:42.344 "aliases": [ 00:09:42.344 "ad5371f7-e82e-44ae-a7f6-980057067093" 00:09:42.344 ], 00:09:42.344 "product_name": "Malloc disk", 00:09:42.344 "block_size": 512, 00:09:42.344 "num_blocks": 65536, 00:09:42.344 "uuid": "ad5371f7-e82e-44ae-a7f6-980057067093", 00:09:42.344 "assigned_rate_limits": { 00:09:42.344 "rw_ios_per_sec": 0, 00:09:42.344 "rw_mbytes_per_sec": 0, 00:09:42.344 "r_mbytes_per_sec": 0, 00:09:42.344 "w_mbytes_per_sec": 0 00:09:42.344 }, 00:09:42.344 "claimed": true, 00:09:42.344 "claim_type": "exclusive_write", 00:09:42.344 "zoned": false, 00:09:42.344 "supported_io_types": { 00:09:42.344 "read": true, 00:09:42.344 "write": true, 00:09:42.344 "unmap": true, 00:09:42.344 "flush": true, 00:09:42.344 "reset": true, 00:09:42.344 "nvme_admin": false, 00:09:42.344 "nvme_io": false, 00:09:42.344 "nvme_io_md": false, 00:09:42.344 "write_zeroes": true, 00:09:42.344 "zcopy": true, 00:09:42.344 
"get_zone_info": false, 00:09:42.344 "zone_management": false, 00:09:42.344 "zone_append": false, 00:09:42.344 "compare": false, 00:09:42.344 "compare_and_write": false, 00:09:42.344 "abort": true, 00:09:42.344 "seek_hole": false, 00:09:42.344 "seek_data": false, 00:09:42.344 "copy": true, 00:09:42.344 "nvme_iov_md": false 00:09:42.344 }, 00:09:42.344 "memory_domains": [ 00:09:42.344 { 00:09:42.344 "dma_device_id": "system", 00:09:42.344 "dma_device_type": 1 00:09:42.344 }, 00:09:42.344 { 00:09:42.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.344 "dma_device_type": 2 00:09:42.344 } 00:09:42.344 ], 00:09:42.344 "driver_specific": {} 00:09:42.344 } 00:09:42.344 ] 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.344 "name": "Existed_Raid", 00:09:42.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.344 "strip_size_kb": 64, 00:09:42.344 "state": "configuring", 00:09:42.344 "raid_level": "raid0", 00:09:42.344 "superblock": false, 00:09:42.344 "num_base_bdevs": 4, 00:09:42.344 "num_base_bdevs_discovered": 3, 00:09:42.344 "num_base_bdevs_operational": 4, 00:09:42.344 "base_bdevs_list": [ 00:09:42.344 { 00:09:42.344 "name": "BaseBdev1", 00:09:42.344 "uuid": "74654cd6-e315-4f2d-97dc-001e71727dfa", 00:09:42.344 "is_configured": true, 00:09:42.344 "data_offset": 0, 00:09:42.344 "data_size": 65536 00:09:42.344 }, 00:09:42.344 { 00:09:42.344 "name": "BaseBdev2", 00:09:42.344 "uuid": "04df8e45-982a-48b5-b1a8-5d37363db0b0", 00:09:42.344 "is_configured": true, 00:09:42.344 "data_offset": 0, 00:09:42.344 "data_size": 65536 00:09:42.344 }, 00:09:42.344 { 00:09:42.344 "name": "BaseBdev3", 00:09:42.344 "uuid": "ad5371f7-e82e-44ae-a7f6-980057067093", 00:09:42.344 "is_configured": true, 00:09:42.344 "data_offset": 0, 00:09:42.344 "data_size": 65536 
00:09:42.344 }, 00:09:42.344 { 00:09:42.344 "name": "BaseBdev4", 00:09:42.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.344 "is_configured": false, 00:09:42.344 "data_offset": 0, 00:09:42.344 "data_size": 0 00:09:42.344 } 00:09:42.344 ] 00:09:42.344 }' 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.344 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.913 [2024-10-09 01:29:41.616161] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:42.913 [2024-10-09 01:29:41.616203] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:42.913 [2024-10-09 01:29:41.616216] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:42.913 [2024-10-09 01:29:41.616617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:42.913 [2024-10-09 01:29:41.616784] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:42.913 [2024-10-09 01:29:41.616795] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:42.913 BaseBdev4 00:09:42.913 [2024-10-09 01:29:41.617040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:42.913 01:29:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.913 [ 00:09:42.913 { 00:09:42.913 "name": "BaseBdev4", 00:09:42.913 "aliases": [ 00:09:42.913 "c07d845e-8391-4256-bb98-2cc9043db2ff" 00:09:42.913 ], 00:09:42.913 "product_name": "Malloc disk", 00:09:42.913 "block_size": 512, 00:09:42.913 "num_blocks": 65536, 00:09:42.913 "uuid": "c07d845e-8391-4256-bb98-2cc9043db2ff", 00:09:42.913 "assigned_rate_limits": { 00:09:42.913 "rw_ios_per_sec": 0, 00:09:42.913 "rw_mbytes_per_sec": 0, 00:09:42.913 "r_mbytes_per_sec": 0, 00:09:42.913 "w_mbytes_per_sec": 0 00:09:42.913 }, 00:09:42.913 "claimed": true, 00:09:42.913 "claim_type": "exclusive_write", 00:09:42.913 "zoned": false, 00:09:42.913 "supported_io_types": { 
00:09:42.913 "read": true, 00:09:42.913 "write": true, 00:09:42.913 "unmap": true, 00:09:42.913 "flush": true, 00:09:42.913 "reset": true, 00:09:42.913 "nvme_admin": false, 00:09:42.913 "nvme_io": false, 00:09:42.913 "nvme_io_md": false, 00:09:42.913 "write_zeroes": true, 00:09:42.913 "zcopy": true, 00:09:42.913 "get_zone_info": false, 00:09:42.913 "zone_management": false, 00:09:42.913 "zone_append": false, 00:09:42.913 "compare": false, 00:09:42.913 "compare_and_write": false, 00:09:42.913 "abort": true, 00:09:42.913 "seek_hole": false, 00:09:42.913 "seek_data": false, 00:09:42.913 "copy": true, 00:09:42.913 "nvme_iov_md": false 00:09:42.913 }, 00:09:42.913 "memory_domains": [ 00:09:42.913 { 00:09:42.913 "dma_device_id": "system", 00:09:42.913 "dma_device_type": 1 00:09:42.913 }, 00:09:42.913 { 00:09:42.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.913 "dma_device_type": 2 00:09:42.913 } 00:09:42.913 ], 00:09:42.913 "driver_specific": {} 00:09:42.913 } 00:09:42.913 ] 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.913 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.913 "name": "Existed_Raid", 00:09:42.913 "uuid": "26a3e44d-514d-41bc-9563-9b2273476adf", 00:09:42.913 "strip_size_kb": 64, 00:09:42.913 "state": "online", 00:09:42.913 "raid_level": "raid0", 00:09:42.913 "superblock": false, 00:09:42.913 "num_base_bdevs": 4, 00:09:42.913 "num_base_bdevs_discovered": 4, 00:09:42.913 "num_base_bdevs_operational": 4, 00:09:42.913 "base_bdevs_list": [ 00:09:42.913 { 00:09:42.913 "name": "BaseBdev1", 00:09:42.913 "uuid": "74654cd6-e315-4f2d-97dc-001e71727dfa", 00:09:42.913 "is_configured": true, 00:09:42.913 "data_offset": 0, 00:09:42.913 "data_size": 65536 00:09:42.913 }, 00:09:42.913 { 00:09:42.913 "name": "BaseBdev2", 00:09:42.913 "uuid": "04df8e45-982a-48b5-b1a8-5d37363db0b0", 00:09:42.913 
"is_configured": true, 00:09:42.913 "data_offset": 0, 00:09:42.913 "data_size": 65536 00:09:42.913 }, 00:09:42.913 { 00:09:42.913 "name": "BaseBdev3", 00:09:42.913 "uuid": "ad5371f7-e82e-44ae-a7f6-980057067093", 00:09:42.913 "is_configured": true, 00:09:42.914 "data_offset": 0, 00:09:42.914 "data_size": 65536 00:09:42.914 }, 00:09:42.914 { 00:09:42.914 "name": "BaseBdev4", 00:09:42.914 "uuid": "c07d845e-8391-4256-bb98-2cc9043db2ff", 00:09:42.914 "is_configured": true, 00:09:42.914 "data_offset": 0, 00:09:42.914 "data_size": 65536 00:09:42.914 } 00:09:42.914 ] 00:09:42.914 }' 00:09:42.914 01:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.914 01:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.483 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:43.483 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:43.483 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:43.483 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:43.483 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:43.483 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:43.483 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:43.483 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:43.483 01:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.483 01:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.483 [2024-10-09 01:29:42.124678] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:09:43.483 01:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.483 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:43.483 "name": "Existed_Raid", 00:09:43.483 "aliases": [ 00:09:43.483 "26a3e44d-514d-41bc-9563-9b2273476adf" 00:09:43.483 ], 00:09:43.483 "product_name": "Raid Volume", 00:09:43.483 "block_size": 512, 00:09:43.483 "num_blocks": 262144, 00:09:43.483 "uuid": "26a3e44d-514d-41bc-9563-9b2273476adf", 00:09:43.483 "assigned_rate_limits": { 00:09:43.483 "rw_ios_per_sec": 0, 00:09:43.483 "rw_mbytes_per_sec": 0, 00:09:43.483 "r_mbytes_per_sec": 0, 00:09:43.483 "w_mbytes_per_sec": 0 00:09:43.483 }, 00:09:43.483 "claimed": false, 00:09:43.483 "zoned": false, 00:09:43.483 "supported_io_types": { 00:09:43.483 "read": true, 00:09:43.483 "write": true, 00:09:43.483 "unmap": true, 00:09:43.483 "flush": true, 00:09:43.483 "reset": true, 00:09:43.483 "nvme_admin": false, 00:09:43.483 "nvme_io": false, 00:09:43.483 "nvme_io_md": false, 00:09:43.483 "write_zeroes": true, 00:09:43.483 "zcopy": false, 00:09:43.483 "get_zone_info": false, 00:09:43.483 "zone_management": false, 00:09:43.483 "zone_append": false, 00:09:43.483 "compare": false, 00:09:43.483 "compare_and_write": false, 00:09:43.483 "abort": false, 00:09:43.483 "seek_hole": false, 00:09:43.483 "seek_data": false, 00:09:43.483 "copy": false, 00:09:43.483 "nvme_iov_md": false 00:09:43.483 }, 00:09:43.483 "memory_domains": [ 00:09:43.483 { 00:09:43.483 "dma_device_id": "system", 00:09:43.483 "dma_device_type": 1 00:09:43.483 }, 00:09:43.483 { 00:09:43.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.483 "dma_device_type": 2 00:09:43.483 }, 00:09:43.483 { 00:09:43.483 "dma_device_id": "system", 00:09:43.483 "dma_device_type": 1 00:09:43.483 }, 00:09:43.483 { 00:09:43.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.483 "dma_device_type": 2 00:09:43.483 }, 00:09:43.483 { 
00:09:43.483 "dma_device_id": "system", 00:09:43.483 "dma_device_type": 1 00:09:43.483 }, 00:09:43.483 { 00:09:43.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.483 "dma_device_type": 2 00:09:43.483 }, 00:09:43.483 { 00:09:43.483 "dma_device_id": "system", 00:09:43.483 "dma_device_type": 1 00:09:43.483 }, 00:09:43.483 { 00:09:43.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.483 "dma_device_type": 2 00:09:43.483 } 00:09:43.483 ], 00:09:43.483 "driver_specific": { 00:09:43.483 "raid": { 00:09:43.483 "uuid": "26a3e44d-514d-41bc-9563-9b2273476adf", 00:09:43.483 "strip_size_kb": 64, 00:09:43.483 "state": "online", 00:09:43.483 "raid_level": "raid0", 00:09:43.483 "superblock": false, 00:09:43.483 "num_base_bdevs": 4, 00:09:43.483 "num_base_bdevs_discovered": 4, 00:09:43.483 "num_base_bdevs_operational": 4, 00:09:43.483 "base_bdevs_list": [ 00:09:43.483 { 00:09:43.483 "name": "BaseBdev1", 00:09:43.483 "uuid": "74654cd6-e315-4f2d-97dc-001e71727dfa", 00:09:43.483 "is_configured": true, 00:09:43.483 "data_offset": 0, 00:09:43.483 "data_size": 65536 00:09:43.483 }, 00:09:43.483 { 00:09:43.483 "name": "BaseBdev2", 00:09:43.483 "uuid": "04df8e45-982a-48b5-b1a8-5d37363db0b0", 00:09:43.483 "is_configured": true, 00:09:43.483 "data_offset": 0, 00:09:43.483 "data_size": 65536 00:09:43.483 }, 00:09:43.483 { 00:09:43.483 "name": "BaseBdev3", 00:09:43.483 "uuid": "ad5371f7-e82e-44ae-a7f6-980057067093", 00:09:43.483 "is_configured": true, 00:09:43.483 "data_offset": 0, 00:09:43.483 "data_size": 65536 00:09:43.484 }, 00:09:43.484 { 00:09:43.484 "name": "BaseBdev4", 00:09:43.484 "uuid": "c07d845e-8391-4256-bb98-2cc9043db2ff", 00:09:43.484 "is_configured": true, 00:09:43.484 "data_offset": 0, 00:09:43.484 "data_size": 65536 00:09:43.484 } 00:09:43.484 ] 00:09:43.484 } 00:09:43.484 } 00:09:43.484 }' 00:09:43.484 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:09:43.484 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:43.484 BaseBdev2 00:09:43.484 BaseBdev3 00:09:43.484 BaseBdev4' 00:09:43.484 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.484 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:43.484 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.484 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:43.484 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.484 01:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.484 01:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.484 01:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.484 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.484 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.484 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.484 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:43.484 01:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.484 01:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.484 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:09:43.484 01:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.484 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.484 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.484 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.484 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.484 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:43.484 01:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.484 01:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.743 01:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.743 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.743 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.743 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.743 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:43.743 01:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.743 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.743 01:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.743 01:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:09:43.743 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.744 [2024-10-09 01:29:42.464485] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:43.744 [2024-10-09 01:29:42.464555] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:43.744 [2024-10-09 01:29:42.464639] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.744 "name": "Existed_Raid", 00:09:43.744 "uuid": "26a3e44d-514d-41bc-9563-9b2273476adf", 00:09:43.744 "strip_size_kb": 64, 00:09:43.744 "state": "offline", 00:09:43.744 "raid_level": "raid0", 00:09:43.744 "superblock": false, 00:09:43.744 "num_base_bdevs": 4, 00:09:43.744 "num_base_bdevs_discovered": 3, 00:09:43.744 "num_base_bdevs_operational": 3, 00:09:43.744 "base_bdevs_list": [ 00:09:43.744 { 00:09:43.744 "name": null, 00:09:43.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.744 "is_configured": false, 00:09:43.744 "data_offset": 0, 00:09:43.744 "data_size": 65536 00:09:43.744 }, 
00:09:43.744 { 00:09:43.744 "name": "BaseBdev2", 00:09:43.744 "uuid": "04df8e45-982a-48b5-b1a8-5d37363db0b0", 00:09:43.744 "is_configured": true, 00:09:43.744 "data_offset": 0, 00:09:43.744 "data_size": 65536 00:09:43.744 }, 00:09:43.744 { 00:09:43.744 "name": "BaseBdev3", 00:09:43.744 "uuid": "ad5371f7-e82e-44ae-a7f6-980057067093", 00:09:43.744 "is_configured": true, 00:09:43.744 "data_offset": 0, 00:09:43.744 "data_size": 65536 00:09:43.744 }, 00:09:43.744 { 00:09:43.744 "name": "BaseBdev4", 00:09:43.744 "uuid": "c07d845e-8391-4256-bb98-2cc9043db2ff", 00:09:43.744 "is_configured": true, 00:09:43.744 "data_offset": 0, 00:09:43.744 "data_size": 65536 00:09:43.744 } 00:09:43.744 ] 00:09:43.744 }' 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.744 01:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.312 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:44.313 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:44.313 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.313 01:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.313 01:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.313 01:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:44.313 01:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.313 [2024-10-09 01:29:43.028980] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.313 [2024-10-09 01:29:43.109280] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.313 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.313 [2024-10-09 01:29:43.185615] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:44.313 [2024-10-09 01:29:43.185677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:44.573 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.573 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:44.573 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:44.573 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:44.573 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:44.573 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.573 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.573 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.573 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:44.573 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:44.573 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:44.573 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:44.573 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:44.573 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:44.573 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.573 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.573 BaseBdev2 00:09:44.573 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.573 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:44.573 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:44.573 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:44.573 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:44.573 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ 
-z '' ]] 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.574 [ 00:09:44.574 { 00:09:44.574 "name": "BaseBdev2", 00:09:44.574 "aliases": [ 00:09:44.574 "cd95626a-0d9f-45a3-886d-0b956db398f4" 00:09:44.574 ], 00:09:44.574 "product_name": "Malloc disk", 00:09:44.574 "block_size": 512, 00:09:44.574 "num_blocks": 65536, 00:09:44.574 "uuid": "cd95626a-0d9f-45a3-886d-0b956db398f4", 00:09:44.574 "assigned_rate_limits": { 00:09:44.574 "rw_ios_per_sec": 0, 00:09:44.574 "rw_mbytes_per_sec": 0, 00:09:44.574 "r_mbytes_per_sec": 0, 00:09:44.574 "w_mbytes_per_sec": 0 00:09:44.574 }, 00:09:44.574 "claimed": false, 00:09:44.574 "zoned": false, 00:09:44.574 "supported_io_types": { 00:09:44.574 "read": true, 00:09:44.574 "write": true, 00:09:44.574 "unmap": true, 00:09:44.574 "flush": true, 00:09:44.574 "reset": true, 00:09:44.574 "nvme_admin": false, 00:09:44.574 "nvme_io": false, 00:09:44.574 "nvme_io_md": false, 00:09:44.574 "write_zeroes": true, 00:09:44.574 "zcopy": true, 00:09:44.574 "get_zone_info": false, 00:09:44.574 "zone_management": false, 00:09:44.574 "zone_append": false, 00:09:44.574 "compare": false, 00:09:44.574 
"compare_and_write": false, 00:09:44.574 "abort": true, 00:09:44.574 "seek_hole": false, 00:09:44.574 "seek_data": false, 00:09:44.574 "copy": true, 00:09:44.574 "nvme_iov_md": false 00:09:44.574 }, 00:09:44.574 "memory_domains": [ 00:09:44.574 { 00:09:44.574 "dma_device_id": "system", 00:09:44.574 "dma_device_type": 1 00:09:44.574 }, 00:09:44.574 { 00:09:44.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.574 "dma_device_type": 2 00:09:44.574 } 00:09:44.574 ], 00:09:44.574 "driver_specific": {} 00:09:44.574 } 00:09:44.574 ] 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.574 BaseBdev3 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.574 [ 00:09:44.574 { 00:09:44.574 "name": "BaseBdev3", 00:09:44.574 "aliases": [ 00:09:44.574 "ce34414b-8d8d-47b5-bdfc-f93c532907aa" 00:09:44.574 ], 00:09:44.574 "product_name": "Malloc disk", 00:09:44.574 "block_size": 512, 00:09:44.574 "num_blocks": 65536, 00:09:44.574 "uuid": "ce34414b-8d8d-47b5-bdfc-f93c532907aa", 00:09:44.574 "assigned_rate_limits": { 00:09:44.574 "rw_ios_per_sec": 0, 00:09:44.574 "rw_mbytes_per_sec": 0, 00:09:44.574 "r_mbytes_per_sec": 0, 00:09:44.574 "w_mbytes_per_sec": 0 00:09:44.574 }, 00:09:44.574 "claimed": false, 00:09:44.574 "zoned": false, 00:09:44.574 "supported_io_types": { 00:09:44.574 "read": true, 00:09:44.574 "write": true, 00:09:44.574 "unmap": true, 00:09:44.574 "flush": true, 00:09:44.574 "reset": true, 00:09:44.574 "nvme_admin": false, 00:09:44.574 "nvme_io": false, 00:09:44.574 "nvme_io_md": false, 00:09:44.574 "write_zeroes": true, 00:09:44.574 "zcopy": true, 00:09:44.574 "get_zone_info": false, 00:09:44.574 "zone_management": false, 00:09:44.574 "zone_append": false, 00:09:44.574 "compare": false, 00:09:44.574 
"compare_and_write": false, 00:09:44.574 "abort": true, 00:09:44.574 "seek_hole": false, 00:09:44.574 "seek_data": false, 00:09:44.574 "copy": true, 00:09:44.574 "nvme_iov_md": false 00:09:44.574 }, 00:09:44.574 "memory_domains": [ 00:09:44.574 { 00:09:44.574 "dma_device_id": "system", 00:09:44.574 "dma_device_type": 1 00:09:44.574 }, 00:09:44.574 { 00:09:44.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.574 "dma_device_type": 2 00:09:44.574 } 00:09:44.574 ], 00:09:44.574 "driver_specific": {} 00:09:44.574 } 00:09:44.574 ] 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.574 BaseBdev4 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.574 [ 00:09:44.574 { 00:09:44.574 "name": "BaseBdev4", 00:09:44.574 "aliases": [ 00:09:44.574 "073dbb43-a026-4a14-b049-0a68d74654c3" 00:09:44.574 ], 00:09:44.574 "product_name": "Malloc disk", 00:09:44.574 "block_size": 512, 00:09:44.574 "num_blocks": 65536, 00:09:44.574 "uuid": "073dbb43-a026-4a14-b049-0a68d74654c3", 00:09:44.574 "assigned_rate_limits": { 00:09:44.574 "rw_ios_per_sec": 0, 00:09:44.574 "rw_mbytes_per_sec": 0, 00:09:44.574 "r_mbytes_per_sec": 0, 00:09:44.574 "w_mbytes_per_sec": 0 00:09:44.574 }, 00:09:44.574 "claimed": false, 00:09:44.574 "zoned": false, 00:09:44.574 "supported_io_types": { 00:09:44.574 "read": true, 00:09:44.574 "write": true, 00:09:44.574 "unmap": true, 00:09:44.574 "flush": true, 00:09:44.574 "reset": true, 00:09:44.574 "nvme_admin": false, 00:09:44.574 "nvme_io": false, 00:09:44.574 "nvme_io_md": false, 00:09:44.574 "write_zeroes": true, 00:09:44.574 "zcopy": true, 00:09:44.574 "get_zone_info": false, 00:09:44.574 "zone_management": false, 00:09:44.574 "zone_append": false, 00:09:44.574 "compare": false, 00:09:44.574 
"compare_and_write": false, 00:09:44.574 "abort": true, 00:09:44.574 "seek_hole": false, 00:09:44.574 "seek_data": false, 00:09:44.574 "copy": true, 00:09:44.574 "nvme_iov_md": false 00:09:44.574 }, 00:09:44.574 "memory_domains": [ 00:09:44.574 { 00:09:44.574 "dma_device_id": "system", 00:09:44.574 "dma_device_type": 1 00:09:44.574 }, 00:09:44.574 { 00:09:44.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.574 "dma_device_type": 2 00:09:44.574 } 00:09:44.574 ], 00:09:44.574 "driver_specific": {} 00:09:44.574 } 00:09:44.574 ] 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.574 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.575 [2024-10-09 01:29:43.438542] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:44.575 [2024-10-09 01:29:43.438629] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:44.575 [2024-10-09 01:29:43.438668] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:44.575 [2024-10-09 01:29:43.440776] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:44.575 [2024-10-09 01:29:43.440858] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is 
claimed 00:09:44.575 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.575 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:44.575 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.575 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.575 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:44.575 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.575 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:44.575 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.575 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.575 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.575 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.575 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.575 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.575 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.575 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.834 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.834 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.834 "name": "Existed_Raid", 00:09:44.834 
"uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.834 "strip_size_kb": 64, 00:09:44.834 "state": "configuring", 00:09:44.834 "raid_level": "raid0", 00:09:44.834 "superblock": false, 00:09:44.834 "num_base_bdevs": 4, 00:09:44.834 "num_base_bdevs_discovered": 3, 00:09:44.834 "num_base_bdevs_operational": 4, 00:09:44.834 "base_bdevs_list": [ 00:09:44.834 { 00:09:44.834 "name": "BaseBdev1", 00:09:44.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.834 "is_configured": false, 00:09:44.834 "data_offset": 0, 00:09:44.834 "data_size": 0 00:09:44.834 }, 00:09:44.834 { 00:09:44.834 "name": "BaseBdev2", 00:09:44.834 "uuid": "cd95626a-0d9f-45a3-886d-0b956db398f4", 00:09:44.834 "is_configured": true, 00:09:44.834 "data_offset": 0, 00:09:44.834 "data_size": 65536 00:09:44.834 }, 00:09:44.834 { 00:09:44.834 "name": "BaseBdev3", 00:09:44.834 "uuid": "ce34414b-8d8d-47b5-bdfc-f93c532907aa", 00:09:44.834 "is_configured": true, 00:09:44.834 "data_offset": 0, 00:09:44.834 "data_size": 65536 00:09:44.834 }, 00:09:44.834 { 00:09:44.834 "name": "BaseBdev4", 00:09:44.834 "uuid": "073dbb43-a026-4a14-b049-0a68d74654c3", 00:09:44.834 "is_configured": true, 00:09:44.834 "data_offset": 0, 00:09:44.834 "data_size": 65536 00:09:44.834 } 00:09:44.834 ] 00:09:44.834 }' 00:09:44.834 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.834 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.093 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:45.093 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.093 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.093 [2024-10-09 01:29:43.906683] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:45.093 01:29:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.093 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:45.093 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.093 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.093 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:45.093 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.093 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:45.093 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.093 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.093 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.093 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.093 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.093 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.093 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.093 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.093 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.093 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.093 "name": "Existed_Raid", 00:09:45.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.093 
"strip_size_kb": 64, 00:09:45.093 "state": "configuring", 00:09:45.093 "raid_level": "raid0", 00:09:45.093 "superblock": false, 00:09:45.093 "num_base_bdevs": 4, 00:09:45.093 "num_base_bdevs_discovered": 2, 00:09:45.093 "num_base_bdevs_operational": 4, 00:09:45.093 "base_bdevs_list": [ 00:09:45.093 { 00:09:45.093 "name": "BaseBdev1", 00:09:45.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.093 "is_configured": false, 00:09:45.093 "data_offset": 0, 00:09:45.093 "data_size": 0 00:09:45.093 }, 00:09:45.093 { 00:09:45.093 "name": null, 00:09:45.093 "uuid": "cd95626a-0d9f-45a3-886d-0b956db398f4", 00:09:45.093 "is_configured": false, 00:09:45.093 "data_offset": 0, 00:09:45.093 "data_size": 65536 00:09:45.093 }, 00:09:45.093 { 00:09:45.093 "name": "BaseBdev3", 00:09:45.093 "uuid": "ce34414b-8d8d-47b5-bdfc-f93c532907aa", 00:09:45.093 "is_configured": true, 00:09:45.094 "data_offset": 0, 00:09:45.094 "data_size": 65536 00:09:45.094 }, 00:09:45.094 { 00:09:45.094 "name": "BaseBdev4", 00:09:45.094 "uuid": "073dbb43-a026-4a14-b049-0a68d74654c3", 00:09:45.094 "is_configured": true, 00:09:45.094 "data_offset": 0, 00:09:45.094 "data_size": 65536 00:09:45.094 } 00:09:45.094 ] 00:09:45.094 }' 00:09:45.094 01:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.094 01:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.662 
01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.662 [2024-10-09 01:29:44.391463] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:45.662 BaseBdev1 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:45.662 01:29:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.662 [ 00:09:45.662 { 00:09:45.662 "name": "BaseBdev1", 00:09:45.662 "aliases": [ 00:09:45.662 "e09f9104-0e72-438a-bf84-2ff77f82d179" 00:09:45.662 ], 00:09:45.662 "product_name": "Malloc disk", 00:09:45.662 "block_size": 512, 00:09:45.662 "num_blocks": 65536, 00:09:45.662 "uuid": "e09f9104-0e72-438a-bf84-2ff77f82d179", 00:09:45.662 "assigned_rate_limits": { 00:09:45.662 "rw_ios_per_sec": 0, 00:09:45.662 "rw_mbytes_per_sec": 0, 00:09:45.662 "r_mbytes_per_sec": 0, 00:09:45.662 "w_mbytes_per_sec": 0 00:09:45.662 }, 00:09:45.662 "claimed": true, 00:09:45.662 "claim_type": "exclusive_write", 00:09:45.662 "zoned": false, 00:09:45.662 "supported_io_types": { 00:09:45.662 "read": true, 00:09:45.662 "write": true, 00:09:45.662 "unmap": true, 00:09:45.662 "flush": true, 00:09:45.662 "reset": true, 00:09:45.662 "nvme_admin": false, 00:09:45.662 "nvme_io": false, 00:09:45.662 "nvme_io_md": false, 00:09:45.662 "write_zeroes": true, 00:09:45.662 "zcopy": true, 00:09:45.662 "get_zone_info": false, 00:09:45.662 "zone_management": false, 00:09:45.662 "zone_append": false, 00:09:45.662 "compare": false, 00:09:45.662 "compare_and_write": false, 00:09:45.662 "abort": true, 00:09:45.662 "seek_hole": false, 00:09:45.662 "seek_data": false, 00:09:45.662 "copy": true, 00:09:45.662 "nvme_iov_md": false 00:09:45.662 }, 00:09:45.662 "memory_domains": [ 00:09:45.662 { 00:09:45.662 "dma_device_id": "system", 00:09:45.662 "dma_device_type": 1 00:09:45.662 }, 00:09:45.662 { 00:09:45.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.662 "dma_device_type": 2 00:09:45.662 } 00:09:45.662 ], 00:09:45.662 "driver_specific": {} 00:09:45.662 } 00:09:45.662 ] 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.662 01:29:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.662 "name": "Existed_Raid", 00:09:45.662 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:45.662 "strip_size_kb": 64, 00:09:45.662 "state": "configuring", 00:09:45.662 "raid_level": "raid0", 00:09:45.662 "superblock": false, 00:09:45.662 "num_base_bdevs": 4, 00:09:45.662 "num_base_bdevs_discovered": 3, 00:09:45.662 "num_base_bdevs_operational": 4, 00:09:45.662 "base_bdevs_list": [ 00:09:45.662 { 00:09:45.662 "name": "BaseBdev1", 00:09:45.662 "uuid": "e09f9104-0e72-438a-bf84-2ff77f82d179", 00:09:45.662 "is_configured": true, 00:09:45.662 "data_offset": 0, 00:09:45.662 "data_size": 65536 00:09:45.662 }, 00:09:45.662 { 00:09:45.662 "name": null, 00:09:45.662 "uuid": "cd95626a-0d9f-45a3-886d-0b956db398f4", 00:09:45.662 "is_configured": false, 00:09:45.662 "data_offset": 0, 00:09:45.662 "data_size": 65536 00:09:45.662 }, 00:09:45.662 { 00:09:45.662 "name": "BaseBdev3", 00:09:45.662 "uuid": "ce34414b-8d8d-47b5-bdfc-f93c532907aa", 00:09:45.662 "is_configured": true, 00:09:45.662 "data_offset": 0, 00:09:45.662 "data_size": 65536 00:09:45.662 }, 00:09:45.662 { 00:09:45.662 "name": "BaseBdev4", 00:09:45.662 "uuid": "073dbb43-a026-4a14-b049-0a68d74654c3", 00:09:45.662 "is_configured": true, 00:09:45.662 "data_offset": 0, 00:09:45.662 "data_size": 65536 00:09:45.662 } 00:09:45.662 ] 00:09:45.662 }' 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.662 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.232 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.232 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.232 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.232 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:46.232 01:29:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.232 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:46.232 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:46.232 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.232 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.232 [2024-10-09 01:29:44.911650] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:46.232 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.232 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:46.232 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.232 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.232 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:46.232 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.232 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.232 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.232 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.232 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.232 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.232 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:46.232 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.232 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.232 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.232 01:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.232 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.232 "name": "Existed_Raid", 00:09:46.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.232 "strip_size_kb": 64, 00:09:46.232 "state": "configuring", 00:09:46.232 "raid_level": "raid0", 00:09:46.232 "superblock": false, 00:09:46.232 "num_base_bdevs": 4, 00:09:46.232 "num_base_bdevs_discovered": 2, 00:09:46.232 "num_base_bdevs_operational": 4, 00:09:46.232 "base_bdevs_list": [ 00:09:46.232 { 00:09:46.232 "name": "BaseBdev1", 00:09:46.232 "uuid": "e09f9104-0e72-438a-bf84-2ff77f82d179", 00:09:46.232 "is_configured": true, 00:09:46.232 "data_offset": 0, 00:09:46.232 "data_size": 65536 00:09:46.232 }, 00:09:46.232 { 00:09:46.232 "name": null, 00:09:46.232 "uuid": "cd95626a-0d9f-45a3-886d-0b956db398f4", 00:09:46.232 "is_configured": false, 00:09:46.232 "data_offset": 0, 00:09:46.232 "data_size": 65536 00:09:46.232 }, 00:09:46.232 { 00:09:46.232 "name": null, 00:09:46.232 "uuid": "ce34414b-8d8d-47b5-bdfc-f93c532907aa", 00:09:46.232 "is_configured": false, 00:09:46.232 "data_offset": 0, 00:09:46.232 "data_size": 65536 00:09:46.232 }, 00:09:46.232 { 00:09:46.232 "name": "BaseBdev4", 00:09:46.232 "uuid": "073dbb43-a026-4a14-b049-0a68d74654c3", 00:09:46.232 "is_configured": true, 00:09:46.232 "data_offset": 0, 00:09:46.232 "data_size": 65536 00:09:46.232 } 00:09:46.232 ] 00:09:46.232 }' 00:09:46.232 01:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.232 01:29:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.492 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.492 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:46.492 01:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.492 01:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.492 01:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.492 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:46.492 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:46.492 01:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.492 01:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.492 [2024-10-09 01:29:45.363796] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:46.492 01:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.492 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:46.492 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.492 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.492 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:46.492 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.492 01:29:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.492 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.492 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.492 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.492 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.492 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.492 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.492 01:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.492 01:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.751 01:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.751 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.751 "name": "Existed_Raid", 00:09:46.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.751 "strip_size_kb": 64, 00:09:46.751 "state": "configuring", 00:09:46.751 "raid_level": "raid0", 00:09:46.751 "superblock": false, 00:09:46.752 "num_base_bdevs": 4, 00:09:46.752 "num_base_bdevs_discovered": 3, 00:09:46.752 "num_base_bdevs_operational": 4, 00:09:46.752 "base_bdevs_list": [ 00:09:46.752 { 00:09:46.752 "name": "BaseBdev1", 00:09:46.752 "uuid": "e09f9104-0e72-438a-bf84-2ff77f82d179", 00:09:46.752 "is_configured": true, 00:09:46.752 "data_offset": 0, 00:09:46.752 "data_size": 65536 00:09:46.752 }, 00:09:46.752 { 00:09:46.752 "name": null, 00:09:46.752 "uuid": "cd95626a-0d9f-45a3-886d-0b956db398f4", 00:09:46.752 "is_configured": false, 00:09:46.752 "data_offset": 
0, 00:09:46.752 "data_size": 65536 00:09:46.752 }, 00:09:46.752 { 00:09:46.752 "name": "BaseBdev3", 00:09:46.752 "uuid": "ce34414b-8d8d-47b5-bdfc-f93c532907aa", 00:09:46.752 "is_configured": true, 00:09:46.752 "data_offset": 0, 00:09:46.752 "data_size": 65536 00:09:46.752 }, 00:09:46.752 { 00:09:46.752 "name": "BaseBdev4", 00:09:46.752 "uuid": "073dbb43-a026-4a14-b049-0a68d74654c3", 00:09:46.752 "is_configured": true, 00:09:46.752 "data_offset": 0, 00:09:46.752 "data_size": 65536 00:09:46.752 } 00:09:46.752 ] 00:09:46.752 }' 00:09:46.752 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.752 01:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.011 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.011 01:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.011 01:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.011 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:47.011 01:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.011 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:47.011 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:47.011 01:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.011 01:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.011 [2024-10-09 01:29:45.867960] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:47.011 01:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.011 01:29:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:47.011 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.011 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.011 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.011 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.011 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.011 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.011 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.011 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.011 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.011 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.011 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.011 01:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.011 01:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.281 01:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.281 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.281 "name": "Existed_Raid", 00:09:47.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.281 "strip_size_kb": 64, 00:09:47.281 "state": "configuring", 00:09:47.281 
"raid_level": "raid0", 00:09:47.281 "superblock": false, 00:09:47.281 "num_base_bdevs": 4, 00:09:47.281 "num_base_bdevs_discovered": 2, 00:09:47.281 "num_base_bdevs_operational": 4, 00:09:47.281 "base_bdevs_list": [ 00:09:47.281 { 00:09:47.281 "name": null, 00:09:47.281 "uuid": "e09f9104-0e72-438a-bf84-2ff77f82d179", 00:09:47.281 "is_configured": false, 00:09:47.281 "data_offset": 0, 00:09:47.281 "data_size": 65536 00:09:47.281 }, 00:09:47.281 { 00:09:47.281 "name": null, 00:09:47.281 "uuid": "cd95626a-0d9f-45a3-886d-0b956db398f4", 00:09:47.281 "is_configured": false, 00:09:47.281 "data_offset": 0, 00:09:47.281 "data_size": 65536 00:09:47.281 }, 00:09:47.281 { 00:09:47.281 "name": "BaseBdev3", 00:09:47.281 "uuid": "ce34414b-8d8d-47b5-bdfc-f93c532907aa", 00:09:47.281 "is_configured": true, 00:09:47.281 "data_offset": 0, 00:09:47.281 "data_size": 65536 00:09:47.281 }, 00:09:47.281 { 00:09:47.282 "name": "BaseBdev4", 00:09:47.282 "uuid": "073dbb43-a026-4a14-b049-0a68d74654c3", 00:09:47.282 "is_configured": true, 00:09:47.282 "data_offset": 0, 00:09:47.282 "data_size": 65536 00:09:47.282 } 00:09:47.282 ] 00:09:47.282 }' 00:09:47.282 01:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.282 01:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.554 [2024-10-09 01:29:46.367686] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.554 "name": "Existed_Raid", 00:09:47.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.554 "strip_size_kb": 64, 00:09:47.554 "state": "configuring", 00:09:47.554 "raid_level": "raid0", 00:09:47.554 "superblock": false, 00:09:47.554 "num_base_bdevs": 4, 00:09:47.554 "num_base_bdevs_discovered": 3, 00:09:47.554 "num_base_bdevs_operational": 4, 00:09:47.554 "base_bdevs_list": [ 00:09:47.554 { 00:09:47.554 "name": null, 00:09:47.554 "uuid": "e09f9104-0e72-438a-bf84-2ff77f82d179", 00:09:47.554 "is_configured": false, 00:09:47.554 "data_offset": 0, 00:09:47.554 "data_size": 65536 00:09:47.554 }, 00:09:47.554 { 00:09:47.554 "name": "BaseBdev2", 00:09:47.554 "uuid": "cd95626a-0d9f-45a3-886d-0b956db398f4", 00:09:47.554 "is_configured": true, 00:09:47.554 "data_offset": 0, 00:09:47.554 "data_size": 65536 00:09:47.554 }, 00:09:47.554 { 00:09:47.554 "name": "BaseBdev3", 00:09:47.554 "uuid": "ce34414b-8d8d-47b5-bdfc-f93c532907aa", 00:09:47.554 "is_configured": true, 00:09:47.554 "data_offset": 0, 00:09:47.554 "data_size": 65536 00:09:47.554 }, 00:09:47.554 { 00:09:47.554 "name": "BaseBdev4", 00:09:47.554 "uuid": "073dbb43-a026-4a14-b049-0a68d74654c3", 00:09:47.554 "is_configured": true, 00:09:47.554 "data_offset": 0, 00:09:47.554 "data_size": 65536 00:09:47.554 } 00:09:47.554 ] 00:09:47.554 }' 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.554 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.122 01:29:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e09f9104-0e72-438a-bf84-2ff77f82d179 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.122 [2024-10-09 01:29:46.936439] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:48.122 [2024-10-09 01:29:46.936557] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:48.122 [2024-10-09 01:29:46.936589] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:48.122 
[2024-10-09 01:29:46.936938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:09:48.122 [2024-10-09 01:29:46.937126] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:48.122 [2024-10-09 01:29:46.937167] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:48.122 [2024-10-09 01:29:46.937409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.122 NewBaseBdev 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.122 [ 00:09:48.122 { 00:09:48.122 "name": "NewBaseBdev", 00:09:48.122 "aliases": [ 00:09:48.122 "e09f9104-0e72-438a-bf84-2ff77f82d179" 00:09:48.122 ], 00:09:48.122 "product_name": "Malloc disk", 00:09:48.122 "block_size": 512, 00:09:48.122 "num_blocks": 65536, 00:09:48.122 "uuid": "e09f9104-0e72-438a-bf84-2ff77f82d179", 00:09:48.122 "assigned_rate_limits": { 00:09:48.122 "rw_ios_per_sec": 0, 00:09:48.122 "rw_mbytes_per_sec": 0, 00:09:48.122 "r_mbytes_per_sec": 0, 00:09:48.122 "w_mbytes_per_sec": 0 00:09:48.122 }, 00:09:48.122 "claimed": true, 00:09:48.122 "claim_type": "exclusive_write", 00:09:48.122 "zoned": false, 00:09:48.122 "supported_io_types": { 00:09:48.122 "read": true, 00:09:48.122 "write": true, 00:09:48.122 "unmap": true, 00:09:48.122 "flush": true, 00:09:48.122 "reset": true, 00:09:48.122 "nvme_admin": false, 00:09:48.122 "nvme_io": false, 00:09:48.122 "nvme_io_md": false, 00:09:48.122 "write_zeroes": true, 00:09:48.122 "zcopy": true, 00:09:48.122 "get_zone_info": false, 00:09:48.122 "zone_management": false, 00:09:48.122 "zone_append": false, 00:09:48.122 "compare": false, 00:09:48.122 "compare_and_write": false, 00:09:48.122 "abort": true, 00:09:48.122 "seek_hole": false, 00:09:48.122 "seek_data": false, 00:09:48.122 "copy": true, 00:09:48.122 "nvme_iov_md": false 00:09:48.122 }, 00:09:48.122 "memory_domains": [ 00:09:48.122 { 00:09:48.122 "dma_device_id": "system", 00:09:48.122 "dma_device_type": 1 00:09:48.122 }, 00:09:48.122 { 00:09:48.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.122 "dma_device_type": 2 00:09:48.122 } 00:09:48.122 ], 00:09:48.122 "driver_specific": {} 00:09:48.122 } 00:09:48.122 ] 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.122 01:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.382 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.382 "name": "Existed_Raid", 00:09:48.382 "uuid": "b92bc089-d5b2-4ff6-bfb4-f00304b0b620", 00:09:48.382 "strip_size_kb": 64, 00:09:48.382 "state": "online", 
00:09:48.382 "raid_level": "raid0", 00:09:48.382 "superblock": false, 00:09:48.382 "num_base_bdevs": 4, 00:09:48.382 "num_base_bdevs_discovered": 4, 00:09:48.382 "num_base_bdevs_operational": 4, 00:09:48.382 "base_bdevs_list": [ 00:09:48.382 { 00:09:48.382 "name": "NewBaseBdev", 00:09:48.382 "uuid": "e09f9104-0e72-438a-bf84-2ff77f82d179", 00:09:48.382 "is_configured": true, 00:09:48.382 "data_offset": 0, 00:09:48.382 "data_size": 65536 00:09:48.382 }, 00:09:48.382 { 00:09:48.382 "name": "BaseBdev2", 00:09:48.382 "uuid": "cd95626a-0d9f-45a3-886d-0b956db398f4", 00:09:48.382 "is_configured": true, 00:09:48.382 "data_offset": 0, 00:09:48.382 "data_size": 65536 00:09:48.382 }, 00:09:48.382 { 00:09:48.382 "name": "BaseBdev3", 00:09:48.382 "uuid": "ce34414b-8d8d-47b5-bdfc-f93c532907aa", 00:09:48.382 "is_configured": true, 00:09:48.382 "data_offset": 0, 00:09:48.382 "data_size": 65536 00:09:48.382 }, 00:09:48.382 { 00:09:48.382 "name": "BaseBdev4", 00:09:48.382 "uuid": "073dbb43-a026-4a14-b049-0a68d74654c3", 00:09:48.382 "is_configured": true, 00:09:48.382 "data_offset": 0, 00:09:48.382 "data_size": 65536 00:09:48.382 } 00:09:48.382 ] 00:09:48.382 }' 00:09:48.382 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.382 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.641 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:48.641 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:48.641 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:48.641 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:48.641 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:48.641 01:29:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:48.641 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:48.641 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:48.641 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.641 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.641 [2024-10-09 01:29:47.424925] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:48.641 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.641 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:48.641 "name": "Existed_Raid", 00:09:48.641 "aliases": [ 00:09:48.641 "b92bc089-d5b2-4ff6-bfb4-f00304b0b620" 00:09:48.641 ], 00:09:48.641 "product_name": "Raid Volume", 00:09:48.641 "block_size": 512, 00:09:48.641 "num_blocks": 262144, 00:09:48.641 "uuid": "b92bc089-d5b2-4ff6-bfb4-f00304b0b620", 00:09:48.641 "assigned_rate_limits": { 00:09:48.641 "rw_ios_per_sec": 0, 00:09:48.641 "rw_mbytes_per_sec": 0, 00:09:48.641 "r_mbytes_per_sec": 0, 00:09:48.641 "w_mbytes_per_sec": 0 00:09:48.641 }, 00:09:48.641 "claimed": false, 00:09:48.641 "zoned": false, 00:09:48.641 "supported_io_types": { 00:09:48.641 "read": true, 00:09:48.641 "write": true, 00:09:48.641 "unmap": true, 00:09:48.641 "flush": true, 00:09:48.641 "reset": true, 00:09:48.641 "nvme_admin": false, 00:09:48.641 "nvme_io": false, 00:09:48.641 "nvme_io_md": false, 00:09:48.641 "write_zeroes": true, 00:09:48.641 "zcopy": false, 00:09:48.641 "get_zone_info": false, 00:09:48.641 "zone_management": false, 00:09:48.641 "zone_append": false, 00:09:48.641 "compare": false, 00:09:48.641 "compare_and_write": false, 00:09:48.641 "abort": false, 00:09:48.641 "seek_hole": false, 00:09:48.641 "seek_data": 
false, 00:09:48.641 "copy": false, 00:09:48.641 "nvme_iov_md": false 00:09:48.641 }, 00:09:48.641 "memory_domains": [ 00:09:48.641 { 00:09:48.641 "dma_device_id": "system", 00:09:48.641 "dma_device_type": 1 00:09:48.641 }, 00:09:48.641 { 00:09:48.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.641 "dma_device_type": 2 00:09:48.641 }, 00:09:48.641 { 00:09:48.641 "dma_device_id": "system", 00:09:48.641 "dma_device_type": 1 00:09:48.641 }, 00:09:48.641 { 00:09:48.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.641 "dma_device_type": 2 00:09:48.641 }, 00:09:48.641 { 00:09:48.641 "dma_device_id": "system", 00:09:48.641 "dma_device_type": 1 00:09:48.641 }, 00:09:48.641 { 00:09:48.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.641 "dma_device_type": 2 00:09:48.641 }, 00:09:48.641 { 00:09:48.641 "dma_device_id": "system", 00:09:48.641 "dma_device_type": 1 00:09:48.641 }, 00:09:48.641 { 00:09:48.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.641 "dma_device_type": 2 00:09:48.641 } 00:09:48.641 ], 00:09:48.641 "driver_specific": { 00:09:48.641 "raid": { 00:09:48.641 "uuid": "b92bc089-d5b2-4ff6-bfb4-f00304b0b620", 00:09:48.641 "strip_size_kb": 64, 00:09:48.641 "state": "online", 00:09:48.641 "raid_level": "raid0", 00:09:48.641 "superblock": false, 00:09:48.641 "num_base_bdevs": 4, 00:09:48.641 "num_base_bdevs_discovered": 4, 00:09:48.641 "num_base_bdevs_operational": 4, 00:09:48.641 "base_bdevs_list": [ 00:09:48.641 { 00:09:48.642 "name": "NewBaseBdev", 00:09:48.642 "uuid": "e09f9104-0e72-438a-bf84-2ff77f82d179", 00:09:48.642 "is_configured": true, 00:09:48.642 "data_offset": 0, 00:09:48.642 "data_size": 65536 00:09:48.642 }, 00:09:48.642 { 00:09:48.642 "name": "BaseBdev2", 00:09:48.642 "uuid": "cd95626a-0d9f-45a3-886d-0b956db398f4", 00:09:48.642 "is_configured": true, 00:09:48.642 "data_offset": 0, 00:09:48.642 "data_size": 65536 00:09:48.642 }, 00:09:48.642 { 00:09:48.642 "name": "BaseBdev3", 00:09:48.642 "uuid": 
"ce34414b-8d8d-47b5-bdfc-f93c532907aa", 00:09:48.642 "is_configured": true, 00:09:48.642 "data_offset": 0, 00:09:48.642 "data_size": 65536 00:09:48.642 }, 00:09:48.642 { 00:09:48.642 "name": "BaseBdev4", 00:09:48.642 "uuid": "073dbb43-a026-4a14-b049-0a68d74654c3", 00:09:48.642 "is_configured": true, 00:09:48.642 "data_offset": 0, 00:09:48.642 "data_size": 65536 00:09:48.642 } 00:09:48.642 ] 00:09:48.642 } 00:09:48.642 } 00:09:48.642 }' 00:09:48.642 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:48.642 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:48.642 BaseBdev2 00:09:48.642 BaseBdev3 00:09:48.642 BaseBdev4' 00:09:48.642 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == 
\5\1\2\ \ \ ]] 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.901 [2024-10-09 01:29:47.756709] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:48.901 [2024-10-09 01:29:47.756770] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:48.901 [2024-10-09 01:29:47.756869] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.901 [2024-10-09 01:29:47.756956] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.901 [2024-10-09 01:29:47.757008] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.901 01:29:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 81467 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 81467 ']' 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 81467 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:48.901 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81467 00:09:49.160 killing process with pid 81467 00:09:49.160 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:49.160 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:49.160 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81467' 00:09:49.160 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 81467 00:09:49.160 [2024-10-09 01:29:47.804012] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:49.160 01:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 81467 00:09:49.160 [2024-10-09 01:29:47.878785] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:49.419 01:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:49.419 00:09:49.419 real 0m9.910s 00:09:49.419 user 0m16.609s 00:09:49.419 sys 0m2.166s 00:09:49.419 ************************************ 00:09:49.419 END TEST raid_state_function_test 00:09:49.419 ************************************ 00:09:49.419 01:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.419 01:29:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:49.419 01:29:48 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:09:49.419 01:29:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:49.419 01:29:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:49.419 01:29:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:49.679 ************************************ 00:09:49.679 START TEST raid_state_function_test_sb 00:09:49.679 ************************************ 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i 
<= num_base_bdevs )) 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:49.679 01:29:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=82127 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82127' 00:09:49.679 Process raid pid: 82127 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 82127 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 82127 ']' 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:49.679 01:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.679 [2024-10-09 01:29:48.415421] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:09:49.679 [2024-10-09 01:29:48.415635] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.679 [2024-10-09 01:29:48.548157] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:49.938 [2024-10-09 01:29:48.576064] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.938 [2024-10-09 01:29:48.645026] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.938 [2024-10-09 01:29:48.720733] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.938 [2024-10-09 01:29:48.720774] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.507 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:50.507 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:50.507 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:50.507 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.507 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.507 [2024-10-09 01:29:49.240497] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:50.507 [2024-10-09 01:29:49.240619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:50.507 [2024-10-09 01:29:49.240664] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:50.507 [2024-10-09 01:29:49.240686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:50.507 [2024-10-09 01:29:49.240720] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:50.507 [2024-10-09 01:29:49.240740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:50.507 [2024-10-09 01:29:49.240760] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:09:50.507 [2024-10-09 01:29:49.240791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:50.507 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.507 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:50.507 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.507 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.507 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.507 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.507 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:50.507 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.507 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.507 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.507 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.507 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.507 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.507 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.507 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.507 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:50.507 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.507 "name": "Existed_Raid", 00:09:50.507 "uuid": "f8017d3d-f4ce-41e1-b3f6-3cedeb31c8b2", 00:09:50.507 "strip_size_kb": 64, 00:09:50.507 "state": "configuring", 00:09:50.507 "raid_level": "raid0", 00:09:50.507 "superblock": true, 00:09:50.507 "num_base_bdevs": 4, 00:09:50.507 "num_base_bdevs_discovered": 0, 00:09:50.507 "num_base_bdevs_operational": 4, 00:09:50.507 "base_bdevs_list": [ 00:09:50.507 { 00:09:50.507 "name": "BaseBdev1", 00:09:50.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.507 "is_configured": false, 00:09:50.507 "data_offset": 0, 00:09:50.507 "data_size": 0 00:09:50.507 }, 00:09:50.507 { 00:09:50.507 "name": "BaseBdev2", 00:09:50.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.507 "is_configured": false, 00:09:50.507 "data_offset": 0, 00:09:50.507 "data_size": 0 00:09:50.507 }, 00:09:50.507 { 00:09:50.507 "name": "BaseBdev3", 00:09:50.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.507 "is_configured": false, 00:09:50.507 "data_offset": 0, 00:09:50.507 "data_size": 0 00:09:50.507 }, 00:09:50.507 { 00:09:50.507 "name": "BaseBdev4", 00:09:50.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.507 "is_configured": false, 00:09:50.507 "data_offset": 0, 00:09:50.507 "data_size": 0 00:09:50.507 } 00:09:50.507 ] 00:09:50.507 }' 00:09:50.507 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.507 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:51.076 [2024-10-09 01:29:49.664481] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:51.076 [2024-10-09 01:29:49.664575] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.076 [2024-10-09 01:29:49.676511] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:51.076 [2024-10-09 01:29:49.676590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:51.076 [2024-10-09 01:29:49.676619] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:51.076 [2024-10-09 01:29:49.676640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:51.076 [2024-10-09 01:29:49.676671] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:51.076 [2024-10-09 01:29:49.676689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:51.076 [2024-10-09 01:29:49.676708] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:51.076 [2024-10-09 01:29:49.676716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.076 01:29:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.076 [2024-10-09 01:29:49.703436] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:51.076 BaseBdev1 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.076 [ 00:09:51.076 { 00:09:51.076 "name": "BaseBdev1", 00:09:51.076 "aliases": [ 00:09:51.076 "0cf5a9e6-7174-40d7-83a0-cc569dc1c57f" 00:09:51.076 ], 00:09:51.076 "product_name": "Malloc disk", 00:09:51.076 "block_size": 512, 00:09:51.076 "num_blocks": 65536, 00:09:51.076 "uuid": "0cf5a9e6-7174-40d7-83a0-cc569dc1c57f", 00:09:51.076 "assigned_rate_limits": { 00:09:51.076 "rw_ios_per_sec": 0, 00:09:51.076 "rw_mbytes_per_sec": 0, 00:09:51.076 "r_mbytes_per_sec": 0, 00:09:51.076 "w_mbytes_per_sec": 0 00:09:51.076 }, 00:09:51.076 "claimed": true, 00:09:51.076 "claim_type": "exclusive_write", 00:09:51.076 "zoned": false, 00:09:51.076 "supported_io_types": { 00:09:51.076 "read": true, 00:09:51.076 "write": true, 00:09:51.076 "unmap": true, 00:09:51.076 "flush": true, 00:09:51.076 "reset": true, 00:09:51.076 "nvme_admin": false, 00:09:51.076 "nvme_io": false, 00:09:51.076 "nvme_io_md": false, 00:09:51.076 "write_zeroes": true, 00:09:51.076 "zcopy": true, 00:09:51.076 "get_zone_info": false, 00:09:51.076 "zone_management": false, 00:09:51.076 "zone_append": false, 00:09:51.076 "compare": false, 00:09:51.076 "compare_and_write": false, 00:09:51.076 "abort": true, 00:09:51.076 "seek_hole": false, 00:09:51.076 "seek_data": false, 00:09:51.076 "copy": true, 00:09:51.076 "nvme_iov_md": false 00:09:51.076 }, 00:09:51.076 "memory_domains": [ 00:09:51.076 { 00:09:51.076 "dma_device_id": "system", 00:09:51.076 "dma_device_type": 1 00:09:51.076 }, 00:09:51.076 { 00:09:51.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.076 "dma_device_type": 2 00:09:51.076 } 00:09:51.076 ], 00:09:51.076 "driver_specific": {} 00:09:51.076 } 00:09:51.076 ] 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:51.076 
01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.076 "name": "Existed_Raid", 00:09:51.076 "uuid": "c04e9212-6d8a-46a6-bf88-4c3606911ca4", 00:09:51.076 "strip_size_kb": 
64, 00:09:51.076 "state": "configuring", 00:09:51.076 "raid_level": "raid0", 00:09:51.076 "superblock": true, 00:09:51.076 "num_base_bdevs": 4, 00:09:51.076 "num_base_bdevs_discovered": 1, 00:09:51.076 "num_base_bdevs_operational": 4, 00:09:51.076 "base_bdevs_list": [ 00:09:51.076 { 00:09:51.076 "name": "BaseBdev1", 00:09:51.076 "uuid": "0cf5a9e6-7174-40d7-83a0-cc569dc1c57f", 00:09:51.076 "is_configured": true, 00:09:51.076 "data_offset": 2048, 00:09:51.076 "data_size": 63488 00:09:51.076 }, 00:09:51.076 { 00:09:51.076 "name": "BaseBdev2", 00:09:51.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.076 "is_configured": false, 00:09:51.076 "data_offset": 0, 00:09:51.076 "data_size": 0 00:09:51.076 }, 00:09:51.076 { 00:09:51.076 "name": "BaseBdev3", 00:09:51.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.076 "is_configured": false, 00:09:51.076 "data_offset": 0, 00:09:51.076 "data_size": 0 00:09:51.076 }, 00:09:51.076 { 00:09:51.076 "name": "BaseBdev4", 00:09:51.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.076 "is_configured": false, 00:09:51.076 "data_offset": 0, 00:09:51.076 "data_size": 0 00:09:51.076 } 00:09:51.076 ] 00:09:51.076 }' 00:09:51.076 01:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.077 01:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.336 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:51.336 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.336 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.336 [2024-10-09 01:29:50.187614] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:51.336 [2024-10-09 01:29:50.187709] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
Existed_Raid, state configuring 00:09:51.336 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.336 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:51.336 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.336 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.336 [2024-10-09 01:29:50.199657] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:51.336 [2024-10-09 01:29:50.201852] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:51.336 [2024-10-09 01:29:50.201918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:51.336 [2024-10-09 01:29:50.201946] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:51.336 [2024-10-09 01:29:50.201966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:51.336 [2024-10-09 01:29:50.201984] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:51.336 [2024-10-09 01:29:50.202002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:51.336 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.336 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:51.336 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:51.336 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:51.336 01:29:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.336 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.336 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.336 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.336 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:51.336 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.336 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.336 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.336 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.336 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.336 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.336 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.336 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.595 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.596 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.596 "name": "Existed_Raid", 00:09:51.596 "uuid": "eb6a3e55-57b6-4de0-b452-702abe48c590", 00:09:51.596 "strip_size_kb": 64, 00:09:51.596 "state": "configuring", 00:09:51.596 "raid_level": "raid0", 00:09:51.596 "superblock": true, 00:09:51.596 "num_base_bdevs": 4, 00:09:51.596 
"num_base_bdevs_discovered": 1, 00:09:51.596 "num_base_bdevs_operational": 4, 00:09:51.596 "base_bdevs_list": [ 00:09:51.596 { 00:09:51.596 "name": "BaseBdev1", 00:09:51.596 "uuid": "0cf5a9e6-7174-40d7-83a0-cc569dc1c57f", 00:09:51.596 "is_configured": true, 00:09:51.596 "data_offset": 2048, 00:09:51.596 "data_size": 63488 00:09:51.596 }, 00:09:51.596 { 00:09:51.596 "name": "BaseBdev2", 00:09:51.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.596 "is_configured": false, 00:09:51.596 "data_offset": 0, 00:09:51.596 "data_size": 0 00:09:51.596 }, 00:09:51.596 { 00:09:51.596 "name": "BaseBdev3", 00:09:51.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.596 "is_configured": false, 00:09:51.596 "data_offset": 0, 00:09:51.596 "data_size": 0 00:09:51.596 }, 00:09:51.596 { 00:09:51.596 "name": "BaseBdev4", 00:09:51.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.596 "is_configured": false, 00:09:51.596 "data_offset": 0, 00:09:51.596 "data_size": 0 00:09:51.596 } 00:09:51.596 ] 00:09:51.596 }' 00:09:51.596 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.596 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.855 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:51.855 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.855 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.855 [2024-10-09 01:29:50.659339] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:51.855 BaseBdev2 00:09:51.855 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.855 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:51.855 01:29:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:51.855 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:51.855 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:51.855 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:51.855 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:51.855 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:51.855 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.855 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.855 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.855 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:51.855 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.855 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.855 [ 00:09:51.855 { 00:09:51.855 "name": "BaseBdev2", 00:09:51.855 "aliases": [ 00:09:51.855 "6e11b0a9-957b-467d-8b93-b98eeb79bcd7" 00:09:51.855 ], 00:09:51.855 "product_name": "Malloc disk", 00:09:51.855 "block_size": 512, 00:09:51.855 "num_blocks": 65536, 00:09:51.856 "uuid": "6e11b0a9-957b-467d-8b93-b98eeb79bcd7", 00:09:51.856 "assigned_rate_limits": { 00:09:51.856 "rw_ios_per_sec": 0, 00:09:51.856 "rw_mbytes_per_sec": 0, 00:09:51.856 "r_mbytes_per_sec": 0, 00:09:51.856 "w_mbytes_per_sec": 0 00:09:51.856 }, 00:09:51.856 "claimed": true, 00:09:51.856 "claim_type": "exclusive_write", 00:09:51.856 "zoned": false, 
00:09:51.856 "supported_io_types": { 00:09:51.856 "read": true, 00:09:51.856 "write": true, 00:09:51.856 "unmap": true, 00:09:51.856 "flush": true, 00:09:51.856 "reset": true, 00:09:51.856 "nvme_admin": false, 00:09:51.856 "nvme_io": false, 00:09:51.856 "nvme_io_md": false, 00:09:51.856 "write_zeroes": true, 00:09:51.856 "zcopy": true, 00:09:51.856 "get_zone_info": false, 00:09:51.856 "zone_management": false, 00:09:51.856 "zone_append": false, 00:09:51.856 "compare": false, 00:09:51.856 "compare_and_write": false, 00:09:51.856 "abort": true, 00:09:51.856 "seek_hole": false, 00:09:51.856 "seek_data": false, 00:09:51.856 "copy": true, 00:09:51.856 "nvme_iov_md": false 00:09:51.856 }, 00:09:51.856 "memory_domains": [ 00:09:51.856 { 00:09:51.856 "dma_device_id": "system", 00:09:51.856 "dma_device_type": 1 00:09:51.856 }, 00:09:51.856 { 00:09:51.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.856 "dma_device_type": 2 00:09:51.856 } 00:09:51.856 ], 00:09:51.856 "driver_specific": {} 00:09:51.856 } 00:09:51.856 ] 00:09:51.856 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.856 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:51.856 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:51.856 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:51.856 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:51.856 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.856 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.856 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.856 01:29:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.856 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:51.856 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.856 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.856 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.856 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.856 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.856 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.856 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.856 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.856 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.115 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.115 "name": "Existed_Raid", 00:09:52.115 "uuid": "eb6a3e55-57b6-4de0-b452-702abe48c590", 00:09:52.115 "strip_size_kb": 64, 00:09:52.115 "state": "configuring", 00:09:52.115 "raid_level": "raid0", 00:09:52.115 "superblock": true, 00:09:52.115 "num_base_bdevs": 4, 00:09:52.115 "num_base_bdevs_discovered": 2, 00:09:52.115 "num_base_bdevs_operational": 4, 00:09:52.115 "base_bdevs_list": [ 00:09:52.115 { 00:09:52.115 "name": "BaseBdev1", 00:09:52.115 "uuid": "0cf5a9e6-7174-40d7-83a0-cc569dc1c57f", 00:09:52.115 "is_configured": true, 00:09:52.115 "data_offset": 2048, 00:09:52.115 "data_size": 63488 00:09:52.115 }, 00:09:52.115 { 
00:09:52.115 "name": "BaseBdev2", 00:09:52.115 "uuid": "6e11b0a9-957b-467d-8b93-b98eeb79bcd7", 00:09:52.115 "is_configured": true, 00:09:52.115 "data_offset": 2048, 00:09:52.115 "data_size": 63488 00:09:52.115 }, 00:09:52.115 { 00:09:52.115 "name": "BaseBdev3", 00:09:52.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.115 "is_configured": false, 00:09:52.115 "data_offset": 0, 00:09:52.115 "data_size": 0 00:09:52.115 }, 00:09:52.115 { 00:09:52.115 "name": "BaseBdev4", 00:09:52.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.115 "is_configured": false, 00:09:52.115 "data_offset": 0, 00:09:52.115 "data_size": 0 00:09:52.115 } 00:09:52.115 ] 00:09:52.115 }' 00:09:52.115 01:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.115 01:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.375 [2024-10-09 01:29:51.192128] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:52.375 BaseBdev3 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:52.375 01:29:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.375 [ 00:09:52.375 { 00:09:52.375 "name": "BaseBdev3", 00:09:52.375 "aliases": [ 00:09:52.375 "ace41037-6ab0-4895-8858-a4ed322b8ca5" 00:09:52.375 ], 00:09:52.375 "product_name": "Malloc disk", 00:09:52.375 "block_size": 512, 00:09:52.375 "num_blocks": 65536, 00:09:52.375 "uuid": "ace41037-6ab0-4895-8858-a4ed322b8ca5", 00:09:52.375 "assigned_rate_limits": { 00:09:52.375 "rw_ios_per_sec": 0, 00:09:52.375 "rw_mbytes_per_sec": 0, 00:09:52.375 "r_mbytes_per_sec": 0, 00:09:52.375 "w_mbytes_per_sec": 0 00:09:52.375 }, 00:09:52.375 "claimed": true, 00:09:52.375 "claim_type": "exclusive_write", 00:09:52.375 "zoned": false, 00:09:52.375 "supported_io_types": { 00:09:52.375 "read": true, 00:09:52.375 "write": true, 00:09:52.375 "unmap": true, 00:09:52.375 "flush": true, 00:09:52.375 "reset": true, 00:09:52.375 "nvme_admin": false, 00:09:52.375 "nvme_io": false, 00:09:52.375 "nvme_io_md": false, 00:09:52.375 "write_zeroes": true, 00:09:52.375 "zcopy": true, 
00:09:52.375 "get_zone_info": false, 00:09:52.375 "zone_management": false, 00:09:52.375 "zone_append": false, 00:09:52.375 "compare": false, 00:09:52.375 "compare_and_write": false, 00:09:52.375 "abort": true, 00:09:52.375 "seek_hole": false, 00:09:52.375 "seek_data": false, 00:09:52.375 "copy": true, 00:09:52.375 "nvme_iov_md": false 00:09:52.375 }, 00:09:52.375 "memory_domains": [ 00:09:52.375 { 00:09:52.375 "dma_device_id": "system", 00:09:52.375 "dma_device_type": 1 00:09:52.375 }, 00:09:52.375 { 00:09:52.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.375 "dma_device_type": 2 00:09:52.375 } 00:09:52.375 ], 00:09:52.375 "driver_specific": {} 00:09:52.375 } 00:09:52.375 ] 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.375 
01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.375 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.635 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.635 "name": "Existed_Raid", 00:09:52.635 "uuid": "eb6a3e55-57b6-4de0-b452-702abe48c590", 00:09:52.635 "strip_size_kb": 64, 00:09:52.635 "state": "configuring", 00:09:52.635 "raid_level": "raid0", 00:09:52.635 "superblock": true, 00:09:52.635 "num_base_bdevs": 4, 00:09:52.635 "num_base_bdevs_discovered": 3, 00:09:52.635 "num_base_bdevs_operational": 4, 00:09:52.635 "base_bdevs_list": [ 00:09:52.635 { 00:09:52.635 "name": "BaseBdev1", 00:09:52.635 "uuid": "0cf5a9e6-7174-40d7-83a0-cc569dc1c57f", 00:09:52.635 "is_configured": true, 00:09:52.635 "data_offset": 2048, 00:09:52.635 "data_size": 63488 00:09:52.635 }, 00:09:52.635 { 00:09:52.635 "name": "BaseBdev2", 00:09:52.635 "uuid": "6e11b0a9-957b-467d-8b93-b98eeb79bcd7", 00:09:52.635 "is_configured": true, 00:09:52.635 "data_offset": 2048, 00:09:52.635 "data_size": 63488 00:09:52.635 }, 00:09:52.635 { 00:09:52.635 "name": "BaseBdev3", 00:09:52.635 "uuid": "ace41037-6ab0-4895-8858-a4ed322b8ca5", 00:09:52.635 
"is_configured": true, 00:09:52.635 "data_offset": 2048, 00:09:52.635 "data_size": 63488 00:09:52.635 }, 00:09:52.635 { 00:09:52.635 "name": "BaseBdev4", 00:09:52.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.635 "is_configured": false, 00:09:52.635 "data_offset": 0, 00:09:52.635 "data_size": 0 00:09:52.635 } 00:09:52.635 ] 00:09:52.635 }' 00:09:52.635 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.635 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.895 [2024-10-09 01:29:51.705033] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:52.895 [2024-10-09 01:29:51.705354] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:52.895 [2024-10-09 01:29:51.705419] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:52.895 BaseBdev4 00:09:52.895 [2024-10-09 01:29:51.705777] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:52.895 [2024-10-09 01:29:51.705919] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:52.895 [2024-10-09 01:29:51.705936] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:52.895 [2024-10-09 01:29:51.706074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.895 01:29:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.895 [ 00:09:52.895 { 00:09:52.895 "name": "BaseBdev4", 00:09:52.895 "aliases": [ 00:09:52.895 "22f92762-d936-4d53-8b28-3e0bbe835307" 00:09:52.895 ], 00:09:52.895 "product_name": "Malloc disk", 00:09:52.895 "block_size": 512, 00:09:52.895 "num_blocks": 65536, 00:09:52.895 "uuid": "22f92762-d936-4d53-8b28-3e0bbe835307", 00:09:52.895 "assigned_rate_limits": { 00:09:52.895 "rw_ios_per_sec": 0, 00:09:52.895 "rw_mbytes_per_sec": 0, 00:09:52.895 "r_mbytes_per_sec": 0, 00:09:52.895 "w_mbytes_per_sec": 0 
00:09:52.895 }, 00:09:52.895 "claimed": true, 00:09:52.895 "claim_type": "exclusive_write", 00:09:52.895 "zoned": false, 00:09:52.895 "supported_io_types": { 00:09:52.895 "read": true, 00:09:52.895 "write": true, 00:09:52.895 "unmap": true, 00:09:52.895 "flush": true, 00:09:52.895 "reset": true, 00:09:52.895 "nvme_admin": false, 00:09:52.895 "nvme_io": false, 00:09:52.895 "nvme_io_md": false, 00:09:52.895 "write_zeroes": true, 00:09:52.895 "zcopy": true, 00:09:52.895 "get_zone_info": false, 00:09:52.895 "zone_management": false, 00:09:52.895 "zone_append": false, 00:09:52.895 "compare": false, 00:09:52.895 "compare_and_write": false, 00:09:52.895 "abort": true, 00:09:52.895 "seek_hole": false, 00:09:52.895 "seek_data": false, 00:09:52.895 "copy": true, 00:09:52.895 "nvme_iov_md": false 00:09:52.895 }, 00:09:52.895 "memory_domains": [ 00:09:52.895 { 00:09:52.895 "dma_device_id": "system", 00:09:52.895 "dma_device_type": 1 00:09:52.895 }, 00:09:52.895 { 00:09:52.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.895 "dma_device_type": 2 00:09:52.895 } 00:09:52.895 ], 00:09:52.895 "driver_specific": {} 00:09:52.895 } 00:09:52.895 ] 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.895 01:29:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.895 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.154 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.154 "name": "Existed_Raid", 00:09:53.154 "uuid": "eb6a3e55-57b6-4de0-b452-702abe48c590", 00:09:53.154 "strip_size_kb": 64, 00:09:53.154 "state": "online", 00:09:53.154 "raid_level": "raid0", 00:09:53.154 "superblock": true, 00:09:53.154 "num_base_bdevs": 4, 00:09:53.154 "num_base_bdevs_discovered": 4, 00:09:53.154 "num_base_bdevs_operational": 4, 00:09:53.154 "base_bdevs_list": [ 00:09:53.154 { 00:09:53.154 "name": "BaseBdev1", 00:09:53.154 "uuid": "0cf5a9e6-7174-40d7-83a0-cc569dc1c57f", 00:09:53.154 "is_configured": 
true, 00:09:53.154 "data_offset": 2048, 00:09:53.154 "data_size": 63488 00:09:53.154 }, 00:09:53.154 { 00:09:53.154 "name": "BaseBdev2", 00:09:53.154 "uuid": "6e11b0a9-957b-467d-8b93-b98eeb79bcd7", 00:09:53.154 "is_configured": true, 00:09:53.154 "data_offset": 2048, 00:09:53.154 "data_size": 63488 00:09:53.154 }, 00:09:53.154 { 00:09:53.154 "name": "BaseBdev3", 00:09:53.154 "uuid": "ace41037-6ab0-4895-8858-a4ed322b8ca5", 00:09:53.154 "is_configured": true, 00:09:53.154 "data_offset": 2048, 00:09:53.154 "data_size": 63488 00:09:53.154 }, 00:09:53.154 { 00:09:53.154 "name": "BaseBdev4", 00:09:53.154 "uuid": "22f92762-d936-4d53-8b28-3e0bbe835307", 00:09:53.154 "is_configured": true, 00:09:53.154 "data_offset": 2048, 00:09:53.154 "data_size": 63488 00:09:53.154 } 00:09:53.154 ] 00:09:53.154 }' 00:09:53.154 01:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.154 01:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.413 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:53.413 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:53.413 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:53.413 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:53.414 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:53.414 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:53.414 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:53.414 01:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.414 01:29:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:53.414 01:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.414 [2024-10-09 01:29:52.189510] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.414 01:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.414 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:53.414 "name": "Existed_Raid", 00:09:53.414 "aliases": [ 00:09:53.414 "eb6a3e55-57b6-4de0-b452-702abe48c590" 00:09:53.414 ], 00:09:53.414 "product_name": "Raid Volume", 00:09:53.414 "block_size": 512, 00:09:53.414 "num_blocks": 253952, 00:09:53.414 "uuid": "eb6a3e55-57b6-4de0-b452-702abe48c590", 00:09:53.414 "assigned_rate_limits": { 00:09:53.414 "rw_ios_per_sec": 0, 00:09:53.414 "rw_mbytes_per_sec": 0, 00:09:53.414 "r_mbytes_per_sec": 0, 00:09:53.414 "w_mbytes_per_sec": 0 00:09:53.414 }, 00:09:53.414 "claimed": false, 00:09:53.414 "zoned": false, 00:09:53.414 "supported_io_types": { 00:09:53.414 "read": true, 00:09:53.414 "write": true, 00:09:53.414 "unmap": true, 00:09:53.414 "flush": true, 00:09:53.414 "reset": true, 00:09:53.414 "nvme_admin": false, 00:09:53.414 "nvme_io": false, 00:09:53.414 "nvme_io_md": false, 00:09:53.414 "write_zeroes": true, 00:09:53.414 "zcopy": false, 00:09:53.414 "get_zone_info": false, 00:09:53.414 "zone_management": false, 00:09:53.414 "zone_append": false, 00:09:53.414 "compare": false, 00:09:53.414 "compare_and_write": false, 00:09:53.414 "abort": false, 00:09:53.414 "seek_hole": false, 00:09:53.414 "seek_data": false, 00:09:53.414 "copy": false, 00:09:53.414 "nvme_iov_md": false 00:09:53.414 }, 00:09:53.414 "memory_domains": [ 00:09:53.414 { 00:09:53.414 "dma_device_id": "system", 00:09:53.414 "dma_device_type": 1 00:09:53.414 }, 00:09:53.414 { 00:09:53.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.414 
"dma_device_type": 2 00:09:53.414 }, 00:09:53.414 { 00:09:53.414 "dma_device_id": "system", 00:09:53.414 "dma_device_type": 1 00:09:53.414 }, 00:09:53.414 { 00:09:53.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.414 "dma_device_type": 2 00:09:53.414 }, 00:09:53.414 { 00:09:53.414 "dma_device_id": "system", 00:09:53.414 "dma_device_type": 1 00:09:53.414 }, 00:09:53.414 { 00:09:53.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.414 "dma_device_type": 2 00:09:53.414 }, 00:09:53.414 { 00:09:53.414 "dma_device_id": "system", 00:09:53.414 "dma_device_type": 1 00:09:53.414 }, 00:09:53.414 { 00:09:53.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.414 "dma_device_type": 2 00:09:53.414 } 00:09:53.414 ], 00:09:53.414 "driver_specific": { 00:09:53.414 "raid": { 00:09:53.414 "uuid": "eb6a3e55-57b6-4de0-b452-702abe48c590", 00:09:53.414 "strip_size_kb": 64, 00:09:53.414 "state": "online", 00:09:53.414 "raid_level": "raid0", 00:09:53.414 "superblock": true, 00:09:53.414 "num_base_bdevs": 4, 00:09:53.414 "num_base_bdevs_discovered": 4, 00:09:53.414 "num_base_bdevs_operational": 4, 00:09:53.414 "base_bdevs_list": [ 00:09:53.414 { 00:09:53.414 "name": "BaseBdev1", 00:09:53.414 "uuid": "0cf5a9e6-7174-40d7-83a0-cc569dc1c57f", 00:09:53.414 "is_configured": true, 00:09:53.414 "data_offset": 2048, 00:09:53.414 "data_size": 63488 00:09:53.414 }, 00:09:53.414 { 00:09:53.414 "name": "BaseBdev2", 00:09:53.414 "uuid": "6e11b0a9-957b-467d-8b93-b98eeb79bcd7", 00:09:53.414 "is_configured": true, 00:09:53.414 "data_offset": 2048, 00:09:53.414 "data_size": 63488 00:09:53.414 }, 00:09:53.414 { 00:09:53.414 "name": "BaseBdev3", 00:09:53.414 "uuid": "ace41037-6ab0-4895-8858-a4ed322b8ca5", 00:09:53.414 "is_configured": true, 00:09:53.414 "data_offset": 2048, 00:09:53.414 "data_size": 63488 00:09:53.414 }, 00:09:53.414 { 00:09:53.414 "name": "BaseBdev4", 00:09:53.414 "uuid": "22f92762-d936-4d53-8b28-3e0bbe835307", 00:09:53.414 "is_configured": true, 00:09:53.414 "data_offset": 
2048, 00:09:53.414 "data_size": 63488 00:09:53.414 } 00:09:53.414 ] 00:09:53.414 } 00:09:53.414 } 00:09:53.414 }' 00:09:53.414 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:53.414 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:53.414 BaseBdev2 00:09:53.414 BaseBdev3 00:09:53.414 BaseBdev4' 00:09:53.414 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:53.674 01:29:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.674 [2024-10-09 01:29:52.517326] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:53.674 [2024-10-09 01:29:52.517350] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:53.674 [2024-10-09 01:29:52.517425] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:53.674 01:29:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.674 01:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.934 01:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.934 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.934 "name": "Existed_Raid", 00:09:53.934 "uuid": "eb6a3e55-57b6-4de0-b452-702abe48c590", 00:09:53.934 "strip_size_kb": 64, 00:09:53.934 
"state": "offline", 00:09:53.934 "raid_level": "raid0", 00:09:53.934 "superblock": true, 00:09:53.934 "num_base_bdevs": 4, 00:09:53.934 "num_base_bdevs_discovered": 3, 00:09:53.934 "num_base_bdevs_operational": 3, 00:09:53.934 "base_bdevs_list": [ 00:09:53.934 { 00:09:53.934 "name": null, 00:09:53.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.934 "is_configured": false, 00:09:53.934 "data_offset": 0, 00:09:53.934 "data_size": 63488 00:09:53.934 }, 00:09:53.934 { 00:09:53.934 "name": "BaseBdev2", 00:09:53.934 "uuid": "6e11b0a9-957b-467d-8b93-b98eeb79bcd7", 00:09:53.934 "is_configured": true, 00:09:53.934 "data_offset": 2048, 00:09:53.934 "data_size": 63488 00:09:53.934 }, 00:09:53.934 { 00:09:53.934 "name": "BaseBdev3", 00:09:53.934 "uuid": "ace41037-6ab0-4895-8858-a4ed322b8ca5", 00:09:53.934 "is_configured": true, 00:09:53.934 "data_offset": 2048, 00:09:53.934 "data_size": 63488 00:09:53.934 }, 00:09:53.934 { 00:09:53.934 "name": "BaseBdev4", 00:09:53.934 "uuid": "22f92762-d936-4d53-8b28-3e0bbe835307", 00:09:53.934 "is_configured": true, 00:09:53.934 "data_offset": 2048, 00:09:53.934 "data_size": 63488 00:09:53.934 } 00:09:53.934 ] 00:09:53.934 }' 00:09:53.934 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.934 01:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.193 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:54.193 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:54.193 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:54.193 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.193 01:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.193 01:29:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.193 01:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.193 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:54.193 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:54.193 01:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:54.193 01:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.193 01:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.193 [2024-10-09 01:29:53.001933] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:54.193 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.193 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:54.193 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:54.193 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.193 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.193 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.193 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:54.193 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.193 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:54.193 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:09:54.193 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:54.193 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.193 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.193 [2024-10-09 01:29:53.082345] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.453 [2024-10-09 01:29:53.162565] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:54.453 [2024-10-09 01:29:53.162669] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:54.453 BaseBdev2 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.453 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.454 [ 00:09:54.454 { 00:09:54.454 "name": "BaseBdev2", 00:09:54.454 "aliases": [ 00:09:54.454 "a022d0fe-46a5-4640-93c0-6d8611f87c5a" 00:09:54.454 ], 00:09:54.454 "product_name": "Malloc disk", 00:09:54.454 "block_size": 512, 00:09:54.454 "num_blocks": 65536, 00:09:54.454 "uuid": "a022d0fe-46a5-4640-93c0-6d8611f87c5a", 00:09:54.454 
"assigned_rate_limits": { 00:09:54.454 "rw_ios_per_sec": 0, 00:09:54.454 "rw_mbytes_per_sec": 0, 00:09:54.454 "r_mbytes_per_sec": 0, 00:09:54.454 "w_mbytes_per_sec": 0 00:09:54.454 }, 00:09:54.454 "claimed": false, 00:09:54.454 "zoned": false, 00:09:54.454 "supported_io_types": { 00:09:54.454 "read": true, 00:09:54.454 "write": true, 00:09:54.454 "unmap": true, 00:09:54.454 "flush": true, 00:09:54.454 "reset": true, 00:09:54.454 "nvme_admin": false, 00:09:54.454 "nvme_io": false, 00:09:54.454 "nvme_io_md": false, 00:09:54.454 "write_zeroes": true, 00:09:54.454 "zcopy": true, 00:09:54.454 "get_zone_info": false, 00:09:54.454 "zone_management": false, 00:09:54.454 "zone_append": false, 00:09:54.454 "compare": false, 00:09:54.454 "compare_and_write": false, 00:09:54.454 "abort": true, 00:09:54.454 "seek_hole": false, 00:09:54.454 "seek_data": false, 00:09:54.454 "copy": true, 00:09:54.454 "nvme_iov_md": false 00:09:54.454 }, 00:09:54.454 "memory_domains": [ 00:09:54.454 { 00:09:54.454 "dma_device_id": "system", 00:09:54.454 "dma_device_type": 1 00:09:54.454 }, 00:09:54.454 { 00:09:54.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.454 "dma_device_type": 2 00:09:54.454 } 00:09:54.454 ], 00:09:54.454 "driver_specific": {} 00:09:54.454 } 00:09:54.454 ] 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.454 01:29:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.454 BaseBdev3 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.454 [ 00:09:54.454 { 00:09:54.454 "name": "BaseBdev3", 00:09:54.454 "aliases": [ 00:09:54.454 "6e99129c-3e98-4978-98e4-a3b3b6e85678" 00:09:54.454 ], 00:09:54.454 "product_name": "Malloc disk", 00:09:54.454 "block_size": 512, 00:09:54.454 "num_blocks": 65536, 00:09:54.454 
"uuid": "6e99129c-3e98-4978-98e4-a3b3b6e85678", 00:09:54.454 "assigned_rate_limits": { 00:09:54.454 "rw_ios_per_sec": 0, 00:09:54.454 "rw_mbytes_per_sec": 0, 00:09:54.454 "r_mbytes_per_sec": 0, 00:09:54.454 "w_mbytes_per_sec": 0 00:09:54.454 }, 00:09:54.454 "claimed": false, 00:09:54.454 "zoned": false, 00:09:54.454 "supported_io_types": { 00:09:54.454 "read": true, 00:09:54.454 "write": true, 00:09:54.454 "unmap": true, 00:09:54.454 "flush": true, 00:09:54.454 "reset": true, 00:09:54.454 "nvme_admin": false, 00:09:54.454 "nvme_io": false, 00:09:54.454 "nvme_io_md": false, 00:09:54.454 "write_zeroes": true, 00:09:54.454 "zcopy": true, 00:09:54.454 "get_zone_info": false, 00:09:54.454 "zone_management": false, 00:09:54.454 "zone_append": false, 00:09:54.454 "compare": false, 00:09:54.454 "compare_and_write": false, 00:09:54.454 "abort": true, 00:09:54.454 "seek_hole": false, 00:09:54.454 "seek_data": false, 00:09:54.454 "copy": true, 00:09:54.454 "nvme_iov_md": false 00:09:54.454 }, 00:09:54.454 "memory_domains": [ 00:09:54.454 { 00:09:54.454 "dma_device_id": "system", 00:09:54.454 "dma_device_type": 1 00:09:54.454 }, 00:09:54.454 { 00:09:54.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.454 "dma_device_type": 2 00:09:54.454 } 00:09:54.454 ], 00:09:54.454 "driver_specific": {} 00:09:54.454 } 00:09:54.454 ] 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:09:54.454 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.715 BaseBdev4 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.715 [ 00:09:54.715 { 00:09:54.715 "name": "BaseBdev4", 00:09:54.715 "aliases": [ 00:09:54.715 "22f445f0-c087-43f6-ad9c-ce4257a3651c" 00:09:54.715 ], 00:09:54.715 "product_name": "Malloc disk", 00:09:54.715 "block_size": 512, 
00:09:54.715 "num_blocks": 65536, 00:09:54.715 "uuid": "22f445f0-c087-43f6-ad9c-ce4257a3651c", 00:09:54.715 "assigned_rate_limits": { 00:09:54.715 "rw_ios_per_sec": 0, 00:09:54.715 "rw_mbytes_per_sec": 0, 00:09:54.715 "r_mbytes_per_sec": 0, 00:09:54.715 "w_mbytes_per_sec": 0 00:09:54.715 }, 00:09:54.715 "claimed": false, 00:09:54.715 "zoned": false, 00:09:54.715 "supported_io_types": { 00:09:54.715 "read": true, 00:09:54.715 "write": true, 00:09:54.715 "unmap": true, 00:09:54.715 "flush": true, 00:09:54.715 "reset": true, 00:09:54.715 "nvme_admin": false, 00:09:54.715 "nvme_io": false, 00:09:54.715 "nvme_io_md": false, 00:09:54.715 "write_zeroes": true, 00:09:54.715 "zcopy": true, 00:09:54.715 "get_zone_info": false, 00:09:54.715 "zone_management": false, 00:09:54.715 "zone_append": false, 00:09:54.715 "compare": false, 00:09:54.715 "compare_and_write": false, 00:09:54.715 "abort": true, 00:09:54.715 "seek_hole": false, 00:09:54.715 "seek_data": false, 00:09:54.715 "copy": true, 00:09:54.715 "nvme_iov_md": false 00:09:54.715 }, 00:09:54.715 "memory_domains": [ 00:09:54.715 { 00:09:54.715 "dma_device_id": "system", 00:09:54.715 "dma_device_type": 1 00:09:54.715 }, 00:09:54.715 { 00:09:54.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.715 "dma_device_type": 2 00:09:54.715 } 00:09:54.715 ], 00:09:54.715 "driver_specific": {} 00:09:54.715 } 00:09:54.715 ] 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 
BaseBdev4'\''' -n Existed_Raid 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.715 [2024-10-09 01:29:53.398603] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:54.715 [2024-10-09 01:29:53.398691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:54.715 [2024-10-09 01:29:53.398731] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:54.715 [2024-10-09 01:29:53.400852] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:54.715 [2024-10-09 01:29:53.400937] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.715 01:29:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.715 "name": "Existed_Raid", 00:09:54.715 "uuid": "173e0e00-f432-4439-a3f6-a24bb58cc74b", 00:09:54.715 "strip_size_kb": 64, 00:09:54.715 "state": "configuring", 00:09:54.715 "raid_level": "raid0", 00:09:54.715 "superblock": true, 00:09:54.715 "num_base_bdevs": 4, 00:09:54.715 "num_base_bdevs_discovered": 3, 00:09:54.715 "num_base_bdevs_operational": 4, 00:09:54.715 "base_bdevs_list": [ 00:09:54.715 { 00:09:54.715 "name": "BaseBdev1", 00:09:54.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.715 "is_configured": false, 00:09:54.715 "data_offset": 0, 00:09:54.715 "data_size": 0 00:09:54.715 }, 00:09:54.715 { 00:09:54.715 "name": "BaseBdev2", 00:09:54.715 "uuid": "a022d0fe-46a5-4640-93c0-6d8611f87c5a", 00:09:54.715 "is_configured": true, 00:09:54.715 "data_offset": 2048, 00:09:54.715 "data_size": 63488 00:09:54.715 }, 00:09:54.715 { 00:09:54.715 "name": "BaseBdev3", 00:09:54.715 "uuid": "6e99129c-3e98-4978-98e4-a3b3b6e85678", 00:09:54.715 "is_configured": true, 00:09:54.715 "data_offset": 2048, 00:09:54.715 "data_size": 63488 00:09:54.715 }, 00:09:54.715 { 00:09:54.715 
"name": "BaseBdev4", 00:09:54.715 "uuid": "22f445f0-c087-43f6-ad9c-ce4257a3651c", 00:09:54.715 "is_configured": true, 00:09:54.715 "data_offset": 2048, 00:09:54.715 "data_size": 63488 00:09:54.715 } 00:09:54.715 ] 00:09:54.715 }' 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.715 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.975 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:54.975 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.975 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.975 [2024-10-09 01:29:53.846733] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:54.975 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.975 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:54.975 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.975 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.975 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.975 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.975 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.975 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.975 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.975 01:29:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.975 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.975 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.975 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.975 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.975 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.237 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.237 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.237 "name": "Existed_Raid", 00:09:55.237 "uuid": "173e0e00-f432-4439-a3f6-a24bb58cc74b", 00:09:55.237 "strip_size_kb": 64, 00:09:55.237 "state": "configuring", 00:09:55.237 "raid_level": "raid0", 00:09:55.237 "superblock": true, 00:09:55.237 "num_base_bdevs": 4, 00:09:55.237 "num_base_bdevs_discovered": 2, 00:09:55.237 "num_base_bdevs_operational": 4, 00:09:55.237 "base_bdevs_list": [ 00:09:55.237 { 00:09:55.237 "name": "BaseBdev1", 00:09:55.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.237 "is_configured": false, 00:09:55.237 "data_offset": 0, 00:09:55.237 "data_size": 0 00:09:55.237 }, 00:09:55.237 { 00:09:55.237 "name": null, 00:09:55.237 "uuid": "a022d0fe-46a5-4640-93c0-6d8611f87c5a", 00:09:55.237 "is_configured": false, 00:09:55.237 "data_offset": 0, 00:09:55.237 "data_size": 63488 00:09:55.237 }, 00:09:55.237 { 00:09:55.237 "name": "BaseBdev3", 00:09:55.237 "uuid": "6e99129c-3e98-4978-98e4-a3b3b6e85678", 00:09:55.237 "is_configured": true, 00:09:55.237 "data_offset": 2048, 00:09:55.237 "data_size": 63488 00:09:55.237 }, 00:09:55.237 { 00:09:55.237 "name": 
"BaseBdev4", 00:09:55.237 "uuid": "22f445f0-c087-43f6-ad9c-ce4257a3651c", 00:09:55.237 "is_configured": true, 00:09:55.237 "data_offset": 2048, 00:09:55.237 "data_size": 63488 00:09:55.237 } 00:09:55.237 ] 00:09:55.237 }' 00:09:55.237 01:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.237 01:29:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.496 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.497 [2024-10-09 01:29:54.331583] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.497 BaseBdev1 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:55.497 
01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.497 [ 00:09:55.497 { 00:09:55.497 "name": "BaseBdev1", 00:09:55.497 "aliases": [ 00:09:55.497 "5cb3c17d-4725-4bf7-84d1-766d64a2cbf4" 00:09:55.497 ], 00:09:55.497 "product_name": "Malloc disk", 00:09:55.497 "block_size": 512, 00:09:55.497 "num_blocks": 65536, 00:09:55.497 "uuid": "5cb3c17d-4725-4bf7-84d1-766d64a2cbf4", 00:09:55.497 "assigned_rate_limits": { 00:09:55.497 "rw_ios_per_sec": 0, 00:09:55.497 "rw_mbytes_per_sec": 0, 00:09:55.497 "r_mbytes_per_sec": 0, 00:09:55.497 "w_mbytes_per_sec": 0 00:09:55.497 }, 00:09:55.497 "claimed": true, 00:09:55.497 "claim_type": "exclusive_write", 00:09:55.497 "zoned": false, 00:09:55.497 "supported_io_types": { 00:09:55.497 "read": true, 00:09:55.497 "write": true, 00:09:55.497 "unmap": 
true, 00:09:55.497 "flush": true, 00:09:55.497 "reset": true, 00:09:55.497 "nvme_admin": false, 00:09:55.497 "nvme_io": false, 00:09:55.497 "nvme_io_md": false, 00:09:55.497 "write_zeroes": true, 00:09:55.497 "zcopy": true, 00:09:55.497 "get_zone_info": false, 00:09:55.497 "zone_management": false, 00:09:55.497 "zone_append": false, 00:09:55.497 "compare": false, 00:09:55.497 "compare_and_write": false, 00:09:55.497 "abort": true, 00:09:55.497 "seek_hole": false, 00:09:55.497 "seek_data": false, 00:09:55.497 "copy": true, 00:09:55.497 "nvme_iov_md": false 00:09:55.497 }, 00:09:55.497 "memory_domains": [ 00:09:55.497 { 00:09:55.497 "dma_device_id": "system", 00:09:55.497 "dma_device_type": 1 00:09:55.497 }, 00:09:55.497 { 00:09:55.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.497 "dma_device_type": 2 00:09:55.497 } 00:09:55.497 ], 00:09:55.497 "driver_specific": {} 00:09:55.497 } 00:09:55.497 ] 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.497 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.757 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.757 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.757 "name": "Existed_Raid", 00:09:55.757 "uuid": "173e0e00-f432-4439-a3f6-a24bb58cc74b", 00:09:55.757 "strip_size_kb": 64, 00:09:55.757 "state": "configuring", 00:09:55.757 "raid_level": "raid0", 00:09:55.757 "superblock": true, 00:09:55.757 "num_base_bdevs": 4, 00:09:55.757 "num_base_bdevs_discovered": 3, 00:09:55.757 "num_base_bdevs_operational": 4, 00:09:55.757 "base_bdevs_list": [ 00:09:55.757 { 00:09:55.757 "name": "BaseBdev1", 00:09:55.757 "uuid": "5cb3c17d-4725-4bf7-84d1-766d64a2cbf4", 00:09:55.757 "is_configured": true, 00:09:55.757 "data_offset": 2048, 00:09:55.757 "data_size": 63488 00:09:55.757 }, 00:09:55.757 { 00:09:55.757 "name": null, 00:09:55.757 "uuid": "a022d0fe-46a5-4640-93c0-6d8611f87c5a", 00:09:55.757 "is_configured": false, 00:09:55.757 "data_offset": 0, 00:09:55.757 "data_size": 63488 00:09:55.757 }, 00:09:55.757 { 00:09:55.757 "name": "BaseBdev3", 00:09:55.757 "uuid": "6e99129c-3e98-4978-98e4-a3b3b6e85678", 00:09:55.757 
"is_configured": true, 00:09:55.757 "data_offset": 2048, 00:09:55.757 "data_size": 63488 00:09:55.757 }, 00:09:55.757 { 00:09:55.757 "name": "BaseBdev4", 00:09:55.757 "uuid": "22f445f0-c087-43f6-ad9c-ce4257a3651c", 00:09:55.757 "is_configured": true, 00:09:55.757 "data_offset": 2048, 00:09:55.757 "data_size": 63488 00:09:55.757 } 00:09:55.757 ] 00:09:55.757 }' 00:09:55.757 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.757 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.017 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.017 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:56.017 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.017 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.017 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.017 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:56.017 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:56.017 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.017 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.017 [2024-10-09 01:29:54.855765] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:56.017 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.017 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:56.017 
01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.017 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.017 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:56.017 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.017 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.017 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.017 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.017 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.017 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.017 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.017 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.017 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.017 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.017 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.276 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.276 "name": "Existed_Raid", 00:09:56.276 "uuid": "173e0e00-f432-4439-a3f6-a24bb58cc74b", 00:09:56.276 "strip_size_kb": 64, 00:09:56.276 "state": "configuring", 00:09:56.276 "raid_level": "raid0", 00:09:56.276 "superblock": true, 00:09:56.276 "num_base_bdevs": 4, 
00:09:56.276 "num_base_bdevs_discovered": 2, 00:09:56.276 "num_base_bdevs_operational": 4, 00:09:56.276 "base_bdevs_list": [ 00:09:56.276 { 00:09:56.276 "name": "BaseBdev1", 00:09:56.277 "uuid": "5cb3c17d-4725-4bf7-84d1-766d64a2cbf4", 00:09:56.277 "is_configured": true, 00:09:56.277 "data_offset": 2048, 00:09:56.277 "data_size": 63488 00:09:56.277 }, 00:09:56.277 { 00:09:56.277 "name": null, 00:09:56.277 "uuid": "a022d0fe-46a5-4640-93c0-6d8611f87c5a", 00:09:56.277 "is_configured": false, 00:09:56.277 "data_offset": 0, 00:09:56.277 "data_size": 63488 00:09:56.277 }, 00:09:56.277 { 00:09:56.277 "name": null, 00:09:56.277 "uuid": "6e99129c-3e98-4978-98e4-a3b3b6e85678", 00:09:56.277 "is_configured": false, 00:09:56.277 "data_offset": 0, 00:09:56.277 "data_size": 63488 00:09:56.277 }, 00:09:56.277 { 00:09:56.277 "name": "BaseBdev4", 00:09:56.277 "uuid": "22f445f0-c087-43f6-ad9c-ce4257a3651c", 00:09:56.277 "is_configured": true, 00:09:56.277 "data_offset": 2048, 00:09:56.277 "data_size": 63488 00:09:56.277 } 00:09:56.277 ] 00:09:56.277 }' 00:09:56.277 01:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.277 01:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.536 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.536 01:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.536 01:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.536 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:56.536 01:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.536 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:56.536 01:29:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:56.536 01:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.536 01:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.536 [2024-10-09 01:29:55.379952] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:56.536 01:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.536 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:56.536 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.536 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.536 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:56.536 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.536 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.536 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.536 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.536 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.536 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.536 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.536 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:09:56.536 01:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.536 01:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.536 01:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.795 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.795 "name": "Existed_Raid", 00:09:56.795 "uuid": "173e0e00-f432-4439-a3f6-a24bb58cc74b", 00:09:56.795 "strip_size_kb": 64, 00:09:56.795 "state": "configuring", 00:09:56.795 "raid_level": "raid0", 00:09:56.795 "superblock": true, 00:09:56.795 "num_base_bdevs": 4, 00:09:56.795 "num_base_bdevs_discovered": 3, 00:09:56.795 "num_base_bdevs_operational": 4, 00:09:56.795 "base_bdevs_list": [ 00:09:56.795 { 00:09:56.795 "name": "BaseBdev1", 00:09:56.795 "uuid": "5cb3c17d-4725-4bf7-84d1-766d64a2cbf4", 00:09:56.795 "is_configured": true, 00:09:56.795 "data_offset": 2048, 00:09:56.795 "data_size": 63488 00:09:56.795 }, 00:09:56.795 { 00:09:56.795 "name": null, 00:09:56.795 "uuid": "a022d0fe-46a5-4640-93c0-6d8611f87c5a", 00:09:56.795 "is_configured": false, 00:09:56.795 "data_offset": 0, 00:09:56.795 "data_size": 63488 00:09:56.795 }, 00:09:56.795 { 00:09:56.795 "name": "BaseBdev3", 00:09:56.795 "uuid": "6e99129c-3e98-4978-98e4-a3b3b6e85678", 00:09:56.795 "is_configured": true, 00:09:56.795 "data_offset": 2048, 00:09:56.795 "data_size": 63488 00:09:56.795 }, 00:09:56.795 { 00:09:56.795 "name": "BaseBdev4", 00:09:56.795 "uuid": "22f445f0-c087-43f6-ad9c-ce4257a3651c", 00:09:56.795 "is_configured": true, 00:09:56.795 "data_offset": 2048, 00:09:56.795 "data_size": 63488 00:09:56.795 } 00:09:56.795 ] 00:09:56.795 }' 00:09:56.795 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.795 01:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:57.055 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.055 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:57.055 01:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.055 01:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.055 01:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.055 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:57.055 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:57.055 01:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.055 01:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.055 [2024-10-09 01:29:55.920125] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:57.055 01:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.055 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:57.055 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.055 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.055 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.055 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.055 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:57.055 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.055 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.055 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.055 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.313 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.313 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.313 01:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.313 01:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.313 01:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.313 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.313 "name": "Existed_Raid", 00:09:57.313 "uuid": "173e0e00-f432-4439-a3f6-a24bb58cc74b", 00:09:57.313 "strip_size_kb": 64, 00:09:57.313 "state": "configuring", 00:09:57.313 "raid_level": "raid0", 00:09:57.313 "superblock": true, 00:09:57.313 "num_base_bdevs": 4, 00:09:57.313 "num_base_bdevs_discovered": 2, 00:09:57.313 "num_base_bdevs_operational": 4, 00:09:57.313 "base_bdevs_list": [ 00:09:57.313 { 00:09:57.313 "name": null, 00:09:57.313 "uuid": "5cb3c17d-4725-4bf7-84d1-766d64a2cbf4", 00:09:57.314 "is_configured": false, 00:09:57.314 "data_offset": 0, 00:09:57.314 "data_size": 63488 00:09:57.314 }, 00:09:57.314 { 00:09:57.314 "name": null, 00:09:57.314 "uuid": "a022d0fe-46a5-4640-93c0-6d8611f87c5a", 00:09:57.314 "is_configured": false, 00:09:57.314 "data_offset": 0, 00:09:57.314 "data_size": 63488 00:09:57.314 
}, 00:09:57.314 { 00:09:57.314 "name": "BaseBdev3", 00:09:57.314 "uuid": "6e99129c-3e98-4978-98e4-a3b3b6e85678", 00:09:57.314 "is_configured": true, 00:09:57.314 "data_offset": 2048, 00:09:57.314 "data_size": 63488 00:09:57.314 }, 00:09:57.314 { 00:09:57.314 "name": "BaseBdev4", 00:09:57.314 "uuid": "22f445f0-c087-43f6-ad9c-ce4257a3651c", 00:09:57.314 "is_configured": true, 00:09:57.314 "data_offset": 2048, 00:09:57.314 "data_size": 63488 00:09:57.314 } 00:09:57.314 ] 00:09:57.314 }' 00:09:57.314 01:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.314 01:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.573 01:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.573 01:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:57.573 01:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.573 01:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.573 01:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.573 01:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:57.573 01:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:57.573 01:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.573 01:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.573 [2024-10-09 01:29:56.431793] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:57.573 01:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:57.573 01:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:57.573 01:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.573 01:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.573 01:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.573 01:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.573 01:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.573 01:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.573 01:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.573 01:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.573 01:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.573 01:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.573 01:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.573 01:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.573 01:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.832 01:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.832 01:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.832 "name": "Existed_Raid", 00:09:57.832 "uuid": "173e0e00-f432-4439-a3f6-a24bb58cc74b", 00:09:57.832 
"strip_size_kb": 64, 00:09:57.832 "state": "configuring", 00:09:57.832 "raid_level": "raid0", 00:09:57.832 "superblock": true, 00:09:57.832 "num_base_bdevs": 4, 00:09:57.832 "num_base_bdevs_discovered": 3, 00:09:57.832 "num_base_bdevs_operational": 4, 00:09:57.832 "base_bdevs_list": [ 00:09:57.832 { 00:09:57.832 "name": null, 00:09:57.832 "uuid": "5cb3c17d-4725-4bf7-84d1-766d64a2cbf4", 00:09:57.832 "is_configured": false, 00:09:57.832 "data_offset": 0, 00:09:57.832 "data_size": 63488 00:09:57.832 }, 00:09:57.832 { 00:09:57.832 "name": "BaseBdev2", 00:09:57.832 "uuid": "a022d0fe-46a5-4640-93c0-6d8611f87c5a", 00:09:57.832 "is_configured": true, 00:09:57.832 "data_offset": 2048, 00:09:57.832 "data_size": 63488 00:09:57.832 }, 00:09:57.832 { 00:09:57.832 "name": "BaseBdev3", 00:09:57.832 "uuid": "6e99129c-3e98-4978-98e4-a3b3b6e85678", 00:09:57.832 "is_configured": true, 00:09:57.832 "data_offset": 2048, 00:09:57.832 "data_size": 63488 00:09:57.832 }, 00:09:57.832 { 00:09:57.832 "name": "BaseBdev4", 00:09:57.832 "uuid": "22f445f0-c087-43f6-ad9c-ce4257a3651c", 00:09:57.832 "is_configured": true, 00:09:57.832 "data_offset": 2048, 00:09:57.832 "data_size": 63488 00:09:57.832 } 00:09:57.832 ] 00:09:57.832 }' 00:09:57.832 01:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.832 01:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.091 01:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.091 01:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.091 01:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.091 01:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:58.091 01:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:09:58.091 01:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:58.091 01:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.091 01:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:58.091 01:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.091 01:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.091 01:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.091 01:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5cb3c17d-4725-4bf7-84d1-766d64a2cbf4 00:09:58.091 01:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.091 01:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.351 [2024-10-09 01:29:56.992568] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:58.351 [2024-10-09 01:29:56.992833] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:58.351 [2024-10-09 01:29:56.992885] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:58.351 NewBaseBdev 00:09:58.351 [2024-10-09 01:29:56.993192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:09:58.351 [2024-10-09 01:29:56.993335] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:58.351 [2024-10-09 01:29:56.993349] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:58.351 [2024-10-09 01:29:56.993452] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:09:58.351 01:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.351 01:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:58.351 01:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:58.351 01:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:58.351 01:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:58.351 01:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:58.351 01:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:58.351 01:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:58.351 01:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.351 01:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.351 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.351 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:58.351 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.351 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.351 [ 00:09:58.351 { 00:09:58.351 "name": "NewBaseBdev", 00:09:58.351 "aliases": [ 00:09:58.351 "5cb3c17d-4725-4bf7-84d1-766d64a2cbf4" 00:09:58.351 ], 00:09:58.351 "product_name": "Malloc disk", 00:09:58.351 "block_size": 512, 00:09:58.351 "num_blocks": 65536, 00:09:58.351 "uuid": "5cb3c17d-4725-4bf7-84d1-766d64a2cbf4", 00:09:58.351 
"assigned_rate_limits": { 00:09:58.351 "rw_ios_per_sec": 0, 00:09:58.351 "rw_mbytes_per_sec": 0, 00:09:58.351 "r_mbytes_per_sec": 0, 00:09:58.351 "w_mbytes_per_sec": 0 00:09:58.351 }, 00:09:58.351 "claimed": true, 00:09:58.351 "claim_type": "exclusive_write", 00:09:58.351 "zoned": false, 00:09:58.351 "supported_io_types": { 00:09:58.351 "read": true, 00:09:58.351 "write": true, 00:09:58.351 "unmap": true, 00:09:58.351 "flush": true, 00:09:58.351 "reset": true, 00:09:58.351 "nvme_admin": false, 00:09:58.351 "nvme_io": false, 00:09:58.351 "nvme_io_md": false, 00:09:58.351 "write_zeroes": true, 00:09:58.351 "zcopy": true, 00:09:58.351 "get_zone_info": false, 00:09:58.351 "zone_management": false, 00:09:58.351 "zone_append": false, 00:09:58.351 "compare": false, 00:09:58.351 "compare_and_write": false, 00:09:58.351 "abort": true, 00:09:58.351 "seek_hole": false, 00:09:58.351 "seek_data": false, 00:09:58.351 "copy": true, 00:09:58.351 "nvme_iov_md": false 00:09:58.351 }, 00:09:58.351 "memory_domains": [ 00:09:58.351 { 00:09:58.351 "dma_device_id": "system", 00:09:58.351 "dma_device_type": 1 00:09:58.351 }, 00:09:58.351 { 00:09:58.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.351 "dma_device_type": 2 00:09:58.351 } 00:09:58.351 ], 00:09:58.351 "driver_specific": {} 00:09:58.351 } 00:09:58.351 ] 00:09:58.351 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.351 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:58.351 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:58.352 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.352 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.352 01:29:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.352 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.352 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.352 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.352 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.352 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.352 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.352 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.352 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.352 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.352 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.352 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.352 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.352 "name": "Existed_Raid", 00:09:58.352 "uuid": "173e0e00-f432-4439-a3f6-a24bb58cc74b", 00:09:58.352 "strip_size_kb": 64, 00:09:58.352 "state": "online", 00:09:58.352 "raid_level": "raid0", 00:09:58.352 "superblock": true, 00:09:58.352 "num_base_bdevs": 4, 00:09:58.352 "num_base_bdevs_discovered": 4, 00:09:58.352 "num_base_bdevs_operational": 4, 00:09:58.352 "base_bdevs_list": [ 00:09:58.352 { 00:09:58.352 "name": "NewBaseBdev", 00:09:58.352 "uuid": "5cb3c17d-4725-4bf7-84d1-766d64a2cbf4", 00:09:58.352 "is_configured": true, 00:09:58.352 "data_offset": 2048, 
00:09:58.352 "data_size": 63488 00:09:58.352 }, 00:09:58.352 { 00:09:58.352 "name": "BaseBdev2", 00:09:58.352 "uuid": "a022d0fe-46a5-4640-93c0-6d8611f87c5a", 00:09:58.352 "is_configured": true, 00:09:58.352 "data_offset": 2048, 00:09:58.352 "data_size": 63488 00:09:58.352 }, 00:09:58.352 { 00:09:58.352 "name": "BaseBdev3", 00:09:58.352 "uuid": "6e99129c-3e98-4978-98e4-a3b3b6e85678", 00:09:58.352 "is_configured": true, 00:09:58.352 "data_offset": 2048, 00:09:58.352 "data_size": 63488 00:09:58.352 }, 00:09:58.352 { 00:09:58.352 "name": "BaseBdev4", 00:09:58.352 "uuid": "22f445f0-c087-43f6-ad9c-ce4257a3651c", 00:09:58.352 "is_configured": true, 00:09:58.352 "data_offset": 2048, 00:09:58.352 "data_size": 63488 00:09:58.352 } 00:09:58.352 ] 00:09:58.352 }' 00:09:58.352 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.352 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.611 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:58.611 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:58.611 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:58.870 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:58.870 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:58.870 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:58.870 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:58.870 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:58.870 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:58.870 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.870 [2024-10-09 01:29:57.513062] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:58.870 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.870 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:58.870 "name": "Existed_Raid", 00:09:58.870 "aliases": [ 00:09:58.870 "173e0e00-f432-4439-a3f6-a24bb58cc74b" 00:09:58.870 ], 00:09:58.870 "product_name": "Raid Volume", 00:09:58.870 "block_size": 512, 00:09:58.870 "num_blocks": 253952, 00:09:58.870 "uuid": "173e0e00-f432-4439-a3f6-a24bb58cc74b", 00:09:58.870 "assigned_rate_limits": { 00:09:58.870 "rw_ios_per_sec": 0, 00:09:58.870 "rw_mbytes_per_sec": 0, 00:09:58.870 "r_mbytes_per_sec": 0, 00:09:58.870 "w_mbytes_per_sec": 0 00:09:58.870 }, 00:09:58.870 "claimed": false, 00:09:58.870 "zoned": false, 00:09:58.870 "supported_io_types": { 00:09:58.870 "read": true, 00:09:58.870 "write": true, 00:09:58.870 "unmap": true, 00:09:58.870 "flush": true, 00:09:58.870 "reset": true, 00:09:58.870 "nvme_admin": false, 00:09:58.870 "nvme_io": false, 00:09:58.870 "nvme_io_md": false, 00:09:58.870 "write_zeroes": true, 00:09:58.870 "zcopy": false, 00:09:58.870 "get_zone_info": false, 00:09:58.870 "zone_management": false, 00:09:58.870 "zone_append": false, 00:09:58.870 "compare": false, 00:09:58.870 "compare_and_write": false, 00:09:58.870 "abort": false, 00:09:58.870 "seek_hole": false, 00:09:58.870 "seek_data": false, 00:09:58.870 "copy": false, 00:09:58.870 "nvme_iov_md": false 00:09:58.870 }, 00:09:58.870 "memory_domains": [ 00:09:58.870 { 00:09:58.870 "dma_device_id": "system", 00:09:58.870 "dma_device_type": 1 00:09:58.870 }, 00:09:58.870 { 00:09:58.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.870 "dma_device_type": 2 00:09:58.870 }, 00:09:58.870 { 00:09:58.870 
"dma_device_id": "system", 00:09:58.870 "dma_device_type": 1 00:09:58.870 }, 00:09:58.870 { 00:09:58.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.870 "dma_device_type": 2 00:09:58.870 }, 00:09:58.870 { 00:09:58.870 "dma_device_id": "system", 00:09:58.870 "dma_device_type": 1 00:09:58.870 }, 00:09:58.870 { 00:09:58.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.870 "dma_device_type": 2 00:09:58.870 }, 00:09:58.870 { 00:09:58.870 "dma_device_id": "system", 00:09:58.870 "dma_device_type": 1 00:09:58.870 }, 00:09:58.870 { 00:09:58.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.870 "dma_device_type": 2 00:09:58.870 } 00:09:58.870 ], 00:09:58.870 "driver_specific": { 00:09:58.870 "raid": { 00:09:58.870 "uuid": "173e0e00-f432-4439-a3f6-a24bb58cc74b", 00:09:58.870 "strip_size_kb": 64, 00:09:58.870 "state": "online", 00:09:58.870 "raid_level": "raid0", 00:09:58.870 "superblock": true, 00:09:58.870 "num_base_bdevs": 4, 00:09:58.870 "num_base_bdevs_discovered": 4, 00:09:58.870 "num_base_bdevs_operational": 4, 00:09:58.870 "base_bdevs_list": [ 00:09:58.870 { 00:09:58.870 "name": "NewBaseBdev", 00:09:58.870 "uuid": "5cb3c17d-4725-4bf7-84d1-766d64a2cbf4", 00:09:58.870 "is_configured": true, 00:09:58.870 "data_offset": 2048, 00:09:58.870 "data_size": 63488 00:09:58.870 }, 00:09:58.870 { 00:09:58.870 "name": "BaseBdev2", 00:09:58.870 "uuid": "a022d0fe-46a5-4640-93c0-6d8611f87c5a", 00:09:58.870 "is_configured": true, 00:09:58.870 "data_offset": 2048, 00:09:58.870 "data_size": 63488 00:09:58.870 }, 00:09:58.870 { 00:09:58.870 "name": "BaseBdev3", 00:09:58.870 "uuid": "6e99129c-3e98-4978-98e4-a3b3b6e85678", 00:09:58.870 "is_configured": true, 00:09:58.870 "data_offset": 2048, 00:09:58.870 "data_size": 63488 00:09:58.870 }, 00:09:58.870 { 00:09:58.870 "name": "BaseBdev4", 00:09:58.870 "uuid": "22f445f0-c087-43f6-ad9c-ce4257a3651c", 00:09:58.870 "is_configured": true, 00:09:58.870 "data_offset": 2048, 00:09:58.870 "data_size": 63488 00:09:58.870 } 00:09:58.870 
] 00:09:58.870 } 00:09:58.870 } 00:09:58.870 }' 00:09:58.870 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:58.870 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:58.870 BaseBdev2 00:09:58.870 BaseBdev3 00:09:58.870 BaseBdev4' 00:09:58.870 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.870 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:58.870 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.870 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:58.870 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.871 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.871 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.871 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.871 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.871 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.871 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.871 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:58.871 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] 
| [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.871 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.871 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.871 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.871 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.871 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.871 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.871 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:58.871 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.871 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.871 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.871 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.130 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.130 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.130 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.130 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.130 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:09:59.130 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.130 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.130 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.130 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.130 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.130 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:59.130 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.130 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.130 [2024-10-09 01:29:57.848816] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:59.130 [2024-10-09 01:29:57.848879] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.130 [2024-10-09 01:29:57.848976] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.130 [2024-10-09 01:29:57.849063] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:59.130 [2024-10-09 01:29:57.849114] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:59.130 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.130 01:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 82127 00:09:59.130 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 82127 ']' 00:09:59.130 01:29:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@954 -- # kill -0 82127 00:09:59.130 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:59.130 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:59.130 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82127 00:09:59.130 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:59.130 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:59.130 killing process with pid 82127 00:09:59.130 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82127' 00:09:59.130 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 82127 00:09:59.130 [2024-10-09 01:29:57.899418] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:59.130 01:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 82127 00:09:59.130 [2024-10-09 01:29:57.974746] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:59.699 01:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:59.699 00:09:59.699 real 0m10.027s 00:09:59.699 user 0m16.785s 00:09:59.699 sys 0m2.252s 00:09:59.699 01:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:59.699 ************************************ 00:09:59.699 END TEST raid_state_function_test_sb 00:09:59.699 ************************************ 00:09:59.699 01:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.699 01:29:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:09:59.699 01:29:58 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:59.699 01:29:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:59.699 01:29:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:59.699 ************************************ 00:09:59.699 START TEST raid_superblock_test 00:09:59.699 ************************************ 00:09:59.699 01:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:09:59.699 01:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:59.699 01:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:59.699 01:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:59.699 01:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:59.699 01:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:59.699 01:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:59.699 01:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:59.699 01:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:59.699 01:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:59.699 01:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:59.699 01:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:59.699 01:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:59.699 01:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:59.699 01:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:59.699 01:29:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:59.699 01:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:59.699 01:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=82781 00:09:59.699 01:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:59.699 01:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 82781 00:09:59.699 01:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 82781 ']' 00:09:59.699 01:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.699 01:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:59.699 01:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.699 01:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:59.699 01:29:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.699 [2024-10-09 01:29:58.513342] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:09:59.699 [2024-10-09 01:29:58.513563] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82781 ] 00:09:59.959 [2024-10-09 01:29:58.645803] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:59.959 [2024-10-09 01:29:58.674902] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.959 [2024-10-09 01:29:58.742848] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.959 [2024-10-09 01:29:58.818333] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:59.959 [2024-10-09 01:29:58.818481] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.527 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:00.527 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:00.527 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:00.527 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:00.527 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:00.527 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:00.527 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:00.527 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:00.527 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:00.527 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:00.527 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:00.527 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.527 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.527 malloc1 00:10:00.527 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:00.527 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:00.527 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.527 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.527 [2024-10-09 01:29:59.361631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:00.527 [2024-10-09 01:29:59.361750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.527 [2024-10-09 01:29:59.361809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:00.527 [2024-10-09 01:29:59.361846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.527 [2024-10-09 01:29:59.364265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.527 [2024-10-09 01:29:59.364336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:00.527 pt1 00:10:00.527 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.527 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:00.528 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:00.528 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:00.528 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:00.528 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:00.528 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:00.528 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:10:00.528 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:00.528 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:00.528 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.528 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.528 malloc2 00:10:00.528 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.528 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:00.528 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.528 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.528 [2024-10-09 01:29:59.409242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:00.528 [2024-10-09 01:29:59.409338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.528 [2024-10-09 01:29:59.409374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:00.528 [2024-10-09 01:29:59.409402] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.528 [2024-10-09 01:29:59.411785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.528 [2024-10-09 01:29:59.411851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:00.528 pt2 00:10:00.528 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.528 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:00.528 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:10:00.528 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:00.528 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:00.528 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:00.528 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:00.528 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:00.528 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:00.528 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:00.528 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.528 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.787 malloc3 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.787 [2024-10-09 01:29:59.443787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:00.787 [2024-10-09 01:29:59.443871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.787 [2024-10-09 01:29:59.443908] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:00.787 [2024-10-09 01:29:59.443934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:10:00.787 [2024-10-09 01:29:59.446322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.787 [2024-10-09 01:29:59.446389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:00.787 pt3 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.787 malloc4 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.787 [2024-10-09 01:29:59.482194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:00.787 [2024-10-09 01:29:59.482276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.787 [2024-10-09 01:29:59.482312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:00.787 [2024-10-09 01:29:59.482338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.787 [2024-10-09 01:29:59.484654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.787 [2024-10-09 01:29:59.484718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:00.787 pt4 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.787 [2024-10-09 01:29:59.494221] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:00.787 [2024-10-09 01:29:59.496246] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:00.787 [2024-10-09 01:29:59.496347] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:00.787 [2024-10-09 01:29:59.496450] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 
00:10:00.787 [2024-10-09 01:29:59.496655] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:00.787 [2024-10-09 01:29:59.496702] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:00.787 [2024-10-09 01:29:59.496978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:00.787 [2024-10-09 01:29:59.497170] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:00.787 [2024-10-09 01:29:59.497213] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:00.787 [2024-10-09 01:29:59.497358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.787 
01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.787 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.787 "name": "raid_bdev1", 00:10:00.787 "uuid": "cad1e2e5-9e7e-4fbb-90a6-d4e153c6ab2c", 00:10:00.787 "strip_size_kb": 64, 00:10:00.787 "state": "online", 00:10:00.787 "raid_level": "raid0", 00:10:00.787 "superblock": true, 00:10:00.787 "num_base_bdevs": 4, 00:10:00.787 "num_base_bdevs_discovered": 4, 00:10:00.787 "num_base_bdevs_operational": 4, 00:10:00.787 "base_bdevs_list": [ 00:10:00.787 { 00:10:00.787 "name": "pt1", 00:10:00.787 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:00.787 "is_configured": true, 00:10:00.787 "data_offset": 2048, 00:10:00.787 "data_size": 63488 00:10:00.787 }, 00:10:00.787 { 00:10:00.787 "name": "pt2", 00:10:00.787 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:00.787 "is_configured": true, 00:10:00.787 "data_offset": 2048, 00:10:00.787 "data_size": 63488 00:10:00.788 }, 00:10:00.788 { 00:10:00.788 "name": "pt3", 00:10:00.788 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:00.788 "is_configured": true, 00:10:00.788 "data_offset": 2048, 00:10:00.788 "data_size": 63488 00:10:00.788 }, 00:10:00.788 { 00:10:00.788 "name": "pt4", 00:10:00.788 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:00.788 "is_configured": true, 00:10:00.788 "data_offset": 2048, 00:10:00.788 "data_size": 63488 00:10:00.788 } 00:10:00.788 ] 00:10:00.788 }' 00:10:00.788 01:29:59 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.788 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.047 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:01.047 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:01.047 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:01.047 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:01.047 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:01.047 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:01.047 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:01.047 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.047 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.047 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:01.047 [2024-10-09 01:29:59.922612] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.047 01:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.307 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:01.307 "name": "raid_bdev1", 00:10:01.307 "aliases": [ 00:10:01.307 "cad1e2e5-9e7e-4fbb-90a6-d4e153c6ab2c" 00:10:01.307 ], 00:10:01.307 "product_name": "Raid Volume", 00:10:01.307 "block_size": 512, 00:10:01.307 "num_blocks": 253952, 00:10:01.307 "uuid": "cad1e2e5-9e7e-4fbb-90a6-d4e153c6ab2c", 00:10:01.307 "assigned_rate_limits": { 00:10:01.307 "rw_ios_per_sec": 0, 00:10:01.307 "rw_mbytes_per_sec": 0, 00:10:01.307 "r_mbytes_per_sec": 0, 00:10:01.307 
"w_mbytes_per_sec": 0 00:10:01.307 }, 00:10:01.307 "claimed": false, 00:10:01.307 "zoned": false, 00:10:01.307 "supported_io_types": { 00:10:01.307 "read": true, 00:10:01.307 "write": true, 00:10:01.307 "unmap": true, 00:10:01.307 "flush": true, 00:10:01.307 "reset": true, 00:10:01.307 "nvme_admin": false, 00:10:01.307 "nvme_io": false, 00:10:01.307 "nvme_io_md": false, 00:10:01.307 "write_zeroes": true, 00:10:01.307 "zcopy": false, 00:10:01.307 "get_zone_info": false, 00:10:01.307 "zone_management": false, 00:10:01.307 "zone_append": false, 00:10:01.307 "compare": false, 00:10:01.307 "compare_and_write": false, 00:10:01.307 "abort": false, 00:10:01.307 "seek_hole": false, 00:10:01.307 "seek_data": false, 00:10:01.307 "copy": false, 00:10:01.307 "nvme_iov_md": false 00:10:01.307 }, 00:10:01.307 "memory_domains": [ 00:10:01.307 { 00:10:01.307 "dma_device_id": "system", 00:10:01.307 "dma_device_type": 1 00:10:01.307 }, 00:10:01.307 { 00:10:01.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.307 "dma_device_type": 2 00:10:01.307 }, 00:10:01.307 { 00:10:01.307 "dma_device_id": "system", 00:10:01.307 "dma_device_type": 1 00:10:01.307 }, 00:10:01.307 { 00:10:01.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.307 "dma_device_type": 2 00:10:01.307 }, 00:10:01.307 { 00:10:01.307 "dma_device_id": "system", 00:10:01.307 "dma_device_type": 1 00:10:01.307 }, 00:10:01.307 { 00:10:01.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.307 "dma_device_type": 2 00:10:01.307 }, 00:10:01.307 { 00:10:01.307 "dma_device_id": "system", 00:10:01.307 "dma_device_type": 1 00:10:01.307 }, 00:10:01.307 { 00:10:01.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.307 "dma_device_type": 2 00:10:01.307 } 00:10:01.307 ], 00:10:01.307 "driver_specific": { 00:10:01.307 "raid": { 00:10:01.307 "uuid": "cad1e2e5-9e7e-4fbb-90a6-d4e153c6ab2c", 00:10:01.307 "strip_size_kb": 64, 00:10:01.307 "state": "online", 00:10:01.307 "raid_level": "raid0", 00:10:01.307 "superblock": true, 
00:10:01.307 "num_base_bdevs": 4, 00:10:01.307 "num_base_bdevs_discovered": 4, 00:10:01.307 "num_base_bdevs_operational": 4, 00:10:01.307 "base_bdevs_list": [ 00:10:01.307 { 00:10:01.307 "name": "pt1", 00:10:01.307 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:01.307 "is_configured": true, 00:10:01.307 "data_offset": 2048, 00:10:01.307 "data_size": 63488 00:10:01.307 }, 00:10:01.307 { 00:10:01.307 "name": "pt2", 00:10:01.307 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:01.307 "is_configured": true, 00:10:01.307 "data_offset": 2048, 00:10:01.307 "data_size": 63488 00:10:01.307 }, 00:10:01.307 { 00:10:01.307 "name": "pt3", 00:10:01.307 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:01.307 "is_configured": true, 00:10:01.307 "data_offset": 2048, 00:10:01.307 "data_size": 63488 00:10:01.307 }, 00:10:01.307 { 00:10:01.307 "name": "pt4", 00:10:01.307 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:01.307 "is_configured": true, 00:10:01.307 "data_offset": 2048, 00:10:01.307 "data_size": 63488 00:10:01.307 } 00:10:01.307 ] 00:10:01.307 } 00:10:01.307 } 00:10:01.307 }' 00:10:01.307 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.307 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:01.307 pt2 00:10:01.307 pt3 00:10:01.307 pt4' 00:10:01.307 01:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.307 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:01.307 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.307 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.307 01:30:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:01.307 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.307 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.307 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.307 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.307 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.307 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.307 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:01.307 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.307 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.307 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.307 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.307 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.307 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.307 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.307 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:01.307 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.307 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.307 01:30:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.307 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.307 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.307 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.307 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.307 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:01.307 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.308 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.308 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.308 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.308 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.308 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.308 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:01.308 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:01.308 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.308 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.308 [2024-10-09 01:30:00.198665] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.565 01:30:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cad1e2e5-9e7e-4fbb-90a6-d4e153c6ab2c 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z cad1e2e5-9e7e-4fbb-90a6-d4e153c6ab2c ']' 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.565 [2024-10-09 01:30:00.234366] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:01.565 [2024-10-09 01:30:00.234389] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.565 [2024-10-09 01:30:00.234464] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.565 [2024-10-09 01:30:00.234552] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:01.565 [2024-10-09 01:30:00.234566] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:01.565 01:30:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:01.565 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b 
''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.566 [2024-10-09 01:30:00.394433] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:01.566 [2024-10-09 01:30:00.396645] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:01.566 [2024-10-09 01:30:00.396728] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:01.566 [2024-10-09 01:30:00.396776] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:01.566 [2024-10-09 01:30:00.396843] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:01.566 [2024-10-09 01:30:00.396914] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:01.566 [2024-10-09 01:30:00.396984] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:01.566 [2024-10-09 01:30:00.397034] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:01.566 [2024-10-09 01:30:00.397107] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:01.566 [2024-10-09 01:30:00.397137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:01.566 request: 00:10:01.566 { 00:10:01.566 "name": "raid_bdev1", 00:10:01.566 "raid_level": "raid0", 00:10:01.566 "base_bdevs": [ 00:10:01.566 "malloc1", 00:10:01.566 "malloc2", 00:10:01.566 "malloc3", 00:10:01.566 "malloc4" 00:10:01.566 ], 00:10:01.566 "strip_size_kb": 64, 00:10:01.566 
"superblock": false, 00:10:01.566 "method": "bdev_raid_create", 00:10:01.566 "req_id": 1 00:10:01.566 } 00:10:01.566 Got JSON-RPC error response 00:10:01.566 response: 00:10:01.566 { 00:10:01.566 "code": -17, 00:10:01.566 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:01.566 } 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.566 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.825 [2024-10-09 01:30:00.458431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on malloc1 00:10:01.825 [2024-10-09 01:30:00.458529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.825 [2024-10-09 01:30:00.458564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:01.825 [2024-10-09 01:30:00.458596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.825 [2024-10-09 01:30:00.461072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.825 [2024-10-09 01:30:00.461145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:01.825 [2024-10-09 01:30:00.461233] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:01.825 [2024-10-09 01:30:00.461301] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:01.825 pt1 00:10:01.825 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.825 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:01.825 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:01.825 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.825 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.825 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.825 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.825 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.825 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.825 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:01.825 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.825 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:01.825 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.825 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.825 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.825 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.825 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.825 "name": "raid_bdev1", 00:10:01.825 "uuid": "cad1e2e5-9e7e-4fbb-90a6-d4e153c6ab2c", 00:10:01.825 "strip_size_kb": 64, 00:10:01.825 "state": "configuring", 00:10:01.825 "raid_level": "raid0", 00:10:01.825 "superblock": true, 00:10:01.825 "num_base_bdevs": 4, 00:10:01.825 "num_base_bdevs_discovered": 1, 00:10:01.825 "num_base_bdevs_operational": 4, 00:10:01.825 "base_bdevs_list": [ 00:10:01.825 { 00:10:01.825 "name": "pt1", 00:10:01.825 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:01.825 "is_configured": true, 00:10:01.825 "data_offset": 2048, 00:10:01.825 "data_size": 63488 00:10:01.825 }, 00:10:01.825 { 00:10:01.825 "name": null, 00:10:01.825 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:01.825 "is_configured": false, 00:10:01.825 "data_offset": 2048, 00:10:01.825 "data_size": 63488 00:10:01.825 }, 00:10:01.825 { 00:10:01.825 "name": null, 00:10:01.825 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:01.825 "is_configured": false, 00:10:01.825 "data_offset": 2048, 00:10:01.825 "data_size": 63488 00:10:01.825 }, 00:10:01.825 { 00:10:01.825 "name": null, 00:10:01.825 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:01.825 "is_configured": false, 00:10:01.825 "data_offset": 
2048, 00:10:01.825 "data_size": 63488 00:10:01.825 } 00:10:01.825 ] 00:10:01.825 }' 00:10:01.825 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.825 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.085 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:02.085 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:02.085 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.085 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.085 [2024-10-09 01:30:00.890530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:02.085 [2024-10-09 01:30:00.890624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.085 [2024-10-09 01:30:00.890658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:02.085 [2024-10-09 01:30:00.890689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.085 [2024-10-09 01:30:00.891097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.085 [2024-10-09 01:30:00.891152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:02.085 [2024-10-09 01:30:00.891241] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:02.085 [2024-10-09 01:30:00.891295] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:02.085 pt2 00:10:02.085 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.085 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:02.085 01:30:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.085 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.085 [2024-10-09 01:30:00.902562] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:02.085 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.085 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:02.085 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.085 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.085 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.085 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.085 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.085 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.085 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.085 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.085 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.085 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.085 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.085 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.085 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.085 01:30:00 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.085 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.085 "name": "raid_bdev1", 00:10:02.085 "uuid": "cad1e2e5-9e7e-4fbb-90a6-d4e153c6ab2c", 00:10:02.085 "strip_size_kb": 64, 00:10:02.085 "state": "configuring", 00:10:02.085 "raid_level": "raid0", 00:10:02.085 "superblock": true, 00:10:02.085 "num_base_bdevs": 4, 00:10:02.085 "num_base_bdevs_discovered": 1, 00:10:02.085 "num_base_bdevs_operational": 4, 00:10:02.085 "base_bdevs_list": [ 00:10:02.085 { 00:10:02.085 "name": "pt1", 00:10:02.085 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:02.085 "is_configured": true, 00:10:02.085 "data_offset": 2048, 00:10:02.085 "data_size": 63488 00:10:02.085 }, 00:10:02.085 { 00:10:02.085 "name": null, 00:10:02.085 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:02.085 "is_configured": false, 00:10:02.085 "data_offset": 0, 00:10:02.085 "data_size": 63488 00:10:02.085 }, 00:10:02.085 { 00:10:02.085 "name": null, 00:10:02.085 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:02.085 "is_configured": false, 00:10:02.085 "data_offset": 2048, 00:10:02.085 "data_size": 63488 00:10:02.085 }, 00:10:02.085 { 00:10:02.085 "name": null, 00:10:02.085 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:02.085 "is_configured": false, 00:10:02.085 "data_offset": 2048, 00:10:02.085 "data_size": 63488 00:10:02.085 } 00:10:02.085 ] 00:10:02.085 }' 00:10:02.085 01:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.085 01:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.653 [2024-10-09 01:30:01.334746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:02.653 [2024-10-09 01:30:01.334859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.653 [2024-10-09 01:30:01.334900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:02.653 [2024-10-09 01:30:01.334929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.653 [2024-10-09 01:30:01.335414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.653 [2024-10-09 01:30:01.335468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:02.653 [2024-10-09 01:30:01.335590] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:02.653 [2024-10-09 01:30:01.335643] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:02.653 pt2 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.653 [2024-10-09 01:30:01.346671] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc3 00:10:02.653 [2024-10-09 01:30:01.346762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.653 [2024-10-09 01:30:01.346798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:02.653 [2024-10-09 01:30:01.346825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.653 [2024-10-09 01:30:01.347175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.653 [2024-10-09 01:30:01.347223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:02.653 [2024-10-09 01:30:01.347302] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:02.653 [2024-10-09 01:30:01.347347] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:02.653 pt3 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.653 [2024-10-09 01:30:01.358653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:02.653 [2024-10-09 01:30:01.358728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.653 [2024-10-09 01:30:01.358760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:02.653 [2024-10-09 01:30:01.358785] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.653 [2024-10-09 01:30:01.359125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.653 [2024-10-09 01:30:01.359174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:02.653 [2024-10-09 01:30:01.359255] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:02.653 [2024-10-09 01:30:01.359297] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:02.653 [2024-10-09 01:30:01.359417] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:02.653 [2024-10-09 01:30:01.359451] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:02.653 [2024-10-09 01:30:01.359737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:02.653 [2024-10-09 01:30:01.359884] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:02.653 [2024-10-09 01:30:01.359929] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:02.653 [2024-10-09 01:30:01.360055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.653 pt4 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.653 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.653 "name": "raid_bdev1", 00:10:02.653 "uuid": "cad1e2e5-9e7e-4fbb-90a6-d4e153c6ab2c", 00:10:02.653 "strip_size_kb": 64, 00:10:02.653 "state": "online", 00:10:02.653 "raid_level": "raid0", 00:10:02.653 "superblock": true, 00:10:02.653 "num_base_bdevs": 4, 00:10:02.653 "num_base_bdevs_discovered": 4, 00:10:02.653 "num_base_bdevs_operational": 4, 00:10:02.653 "base_bdevs_list": [ 00:10:02.654 { 00:10:02.654 "name": "pt1", 00:10:02.654 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:02.654 "is_configured": true, 00:10:02.654 "data_offset": 2048, 00:10:02.654 
"data_size": 63488 00:10:02.654 }, 00:10:02.654 { 00:10:02.654 "name": "pt2", 00:10:02.654 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:02.654 "is_configured": true, 00:10:02.654 "data_offset": 2048, 00:10:02.654 "data_size": 63488 00:10:02.654 }, 00:10:02.654 { 00:10:02.654 "name": "pt3", 00:10:02.654 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:02.654 "is_configured": true, 00:10:02.654 "data_offset": 2048, 00:10:02.654 "data_size": 63488 00:10:02.654 }, 00:10:02.654 { 00:10:02.654 "name": "pt4", 00:10:02.654 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:02.654 "is_configured": true, 00:10:02.654 "data_offset": 2048, 00:10:02.654 "data_size": 63488 00:10:02.654 } 00:10:02.654 ] 00:10:02.654 }' 00:10:02.654 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.654 01:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.912 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:02.912 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:02.912 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:02.912 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:02.912 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:02.913 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:02.913 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:02.913 01:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.172 01:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.172 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 
00:10:03.172 [2024-10-09 01:30:01.811098] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:03.172 01:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.172 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:03.172 "name": "raid_bdev1", 00:10:03.172 "aliases": [ 00:10:03.172 "cad1e2e5-9e7e-4fbb-90a6-d4e153c6ab2c" 00:10:03.172 ], 00:10:03.172 "product_name": "Raid Volume", 00:10:03.172 "block_size": 512, 00:10:03.172 "num_blocks": 253952, 00:10:03.172 "uuid": "cad1e2e5-9e7e-4fbb-90a6-d4e153c6ab2c", 00:10:03.172 "assigned_rate_limits": { 00:10:03.172 "rw_ios_per_sec": 0, 00:10:03.172 "rw_mbytes_per_sec": 0, 00:10:03.172 "r_mbytes_per_sec": 0, 00:10:03.172 "w_mbytes_per_sec": 0 00:10:03.172 }, 00:10:03.172 "claimed": false, 00:10:03.172 "zoned": false, 00:10:03.172 "supported_io_types": { 00:10:03.172 "read": true, 00:10:03.172 "write": true, 00:10:03.172 "unmap": true, 00:10:03.172 "flush": true, 00:10:03.172 "reset": true, 00:10:03.172 "nvme_admin": false, 00:10:03.172 "nvme_io": false, 00:10:03.172 "nvme_io_md": false, 00:10:03.172 "write_zeroes": true, 00:10:03.172 "zcopy": false, 00:10:03.172 "get_zone_info": false, 00:10:03.172 "zone_management": false, 00:10:03.172 "zone_append": false, 00:10:03.172 "compare": false, 00:10:03.172 "compare_and_write": false, 00:10:03.172 "abort": false, 00:10:03.172 "seek_hole": false, 00:10:03.172 "seek_data": false, 00:10:03.172 "copy": false, 00:10:03.172 "nvme_iov_md": false 00:10:03.172 }, 00:10:03.172 "memory_domains": [ 00:10:03.172 { 00:10:03.172 "dma_device_id": "system", 00:10:03.172 "dma_device_type": 1 00:10:03.172 }, 00:10:03.172 { 00:10:03.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.172 "dma_device_type": 2 00:10:03.172 }, 00:10:03.172 { 00:10:03.172 "dma_device_id": "system", 00:10:03.172 "dma_device_type": 1 00:10:03.172 }, 00:10:03.172 { 00:10:03.172 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:03.172 "dma_device_type": 2 00:10:03.172 }, 00:10:03.172 { 00:10:03.172 "dma_device_id": "system", 00:10:03.172 "dma_device_type": 1 00:10:03.172 }, 00:10:03.172 { 00:10:03.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.172 "dma_device_type": 2 00:10:03.172 }, 00:10:03.172 { 00:10:03.172 "dma_device_id": "system", 00:10:03.172 "dma_device_type": 1 00:10:03.172 }, 00:10:03.172 { 00:10:03.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.172 "dma_device_type": 2 00:10:03.172 } 00:10:03.172 ], 00:10:03.172 "driver_specific": { 00:10:03.172 "raid": { 00:10:03.172 "uuid": "cad1e2e5-9e7e-4fbb-90a6-d4e153c6ab2c", 00:10:03.172 "strip_size_kb": 64, 00:10:03.172 "state": "online", 00:10:03.172 "raid_level": "raid0", 00:10:03.172 "superblock": true, 00:10:03.172 "num_base_bdevs": 4, 00:10:03.172 "num_base_bdevs_discovered": 4, 00:10:03.172 "num_base_bdevs_operational": 4, 00:10:03.172 "base_bdevs_list": [ 00:10:03.172 { 00:10:03.172 "name": "pt1", 00:10:03.173 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:03.173 "is_configured": true, 00:10:03.173 "data_offset": 2048, 00:10:03.173 "data_size": 63488 00:10:03.173 }, 00:10:03.173 { 00:10:03.173 "name": "pt2", 00:10:03.173 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:03.173 "is_configured": true, 00:10:03.173 "data_offset": 2048, 00:10:03.173 "data_size": 63488 00:10:03.173 }, 00:10:03.173 { 00:10:03.173 "name": "pt3", 00:10:03.173 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:03.173 "is_configured": true, 00:10:03.173 "data_offset": 2048, 00:10:03.173 "data_size": 63488 00:10:03.173 }, 00:10:03.173 { 00:10:03.173 "name": "pt4", 00:10:03.173 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:03.173 "is_configured": true, 00:10:03.173 "data_offset": 2048, 00:10:03.173 "data_size": 63488 00:10:03.173 } 00:10:03.173 ] 00:10:03.173 } 00:10:03.173 } 00:10:03.173 }' 00:10:03.173 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:03.173 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:03.173 pt2 00:10:03.173 pt3 00:10:03.173 pt4' 00:10:03.173 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.173 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:03.173 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.173 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.173 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:03.173 01:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.173 01:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.173 01:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.173 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.173 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.173 01:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.173 01:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:03.173 01:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.173 01:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.173 01:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.173 
01:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.173 01:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.173 01:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.173 01:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.173 01:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:03.173 01:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.173 01:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.173 01:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.438 01:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.438 01:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.438 01:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.438 01:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.438 01:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.438 01:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:03.438 01:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.438 01:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.438 01:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.438 01:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:03.438 01:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.438 01:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:03.438 01:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:03.438 01:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.438 01:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.438 [2024-10-09 01:30:02.159137] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:03.439 01:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.439 01:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' cad1e2e5-9e7e-4fbb-90a6-d4e153c6ab2c '!=' cad1e2e5-9e7e-4fbb-90a6-d4e153c6ab2c ']' 00:10:03.439 01:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:03.439 01:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:03.439 01:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:03.439 01:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 82781 00:10:03.439 01:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 82781 ']' 00:10:03.439 01:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 82781 00:10:03.439 01:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:03.439 01:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:03.439 01:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82781 00:10:03.439 01:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:10:03.439 01:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:03.439 01:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82781' 00:10:03.439 killing process with pid 82781 00:10:03.439 01:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 82781 00:10:03.439 [2024-10-09 01:30:02.226517] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:03.439 [2024-10-09 01:30:02.226621] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:03.439 [2024-10-09 01:30:02.226710] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:03.439 [2024-10-09 01:30:02.226720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:03.439 01:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 82781 00:10:03.439 [2024-10-09 01:30:02.304760] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:04.038 01:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:04.038 00:10:04.038 real 0m4.254s 00:10:04.038 user 0m6.470s 00:10:04.038 sys 0m1.030s 00:10:04.038 01:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:04.038 ************************************ 00:10:04.038 END TEST raid_superblock_test 00:10:04.038 ************************************ 00:10:04.038 01:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.038 01:30:02 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:04.038 01:30:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:04.038 01:30:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:04.038 01:30:02 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:10:04.038 ************************************ 00:10:04.038 START TEST raid_read_error_test 00:10:04.038 ************************************ 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.l3vsT0NQCa 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83029 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83029 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 83029 ']' 00:10:04.038 01:30:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.038 
01:30:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:04.039 01:30:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.039 01:30:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:04.039 01:30:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.039 [2024-10-09 01:30:02.857169] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:10:04.039 [2024-10-09 01:30:02.857394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83029 ] 00:10:04.297 [2024-10-09 01:30:02.989106] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:04.297 [2024-10-09 01:30:03.017340] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.297 [2024-10-09 01:30:03.086178] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.297 [2024-10-09 01:30:03.161837] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.298 [2024-10-09 01:30:03.161881] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.866 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:04.866 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:04.866 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:04.866 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:04.866 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.866 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.866 BaseBdev1_malloc 00:10:04.866 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.866 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:04.866 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.866 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.866 true 00:10:04.866 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.866 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:04.866 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.866 01:30:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:04.866 [2024-10-09 01:30:03.721110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:04.866 [2024-10-09 01:30:03.721212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.866 [2024-10-09 01:30:03.721258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:04.866 [2024-10-09 01:30:03.721293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.866 [2024-10-09 01:30:03.723725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.866 [2024-10-09 01:30:03.723794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:04.866 BaseBdev1 00:10:04.866 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.866 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:04.866 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:04.866 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.866 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.126 BaseBdev2_malloc 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.126 true 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.126 01:30:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.126 [2024-10-09 01:30:03.783721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:05.126 [2024-10-09 01:30:03.783857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.126 [2024-10-09 01:30:03.783912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:05.126 [2024-10-09 01:30:03.783966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.126 [2024-10-09 01:30:03.786818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.126 [2024-10-09 01:30:03.786903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:05.126 BaseBdev2 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.126 BaseBdev3_malloc 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.126 true 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.126 [2024-10-09 01:30:03.830434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:05.126 [2024-10-09 01:30:03.830530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.126 [2024-10-09 01:30:03.830565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:05.126 [2024-10-09 01:30:03.830595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.126 [2024-10-09 01:30:03.832969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.126 [2024-10-09 01:30:03.833040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:05.126 BaseBdev3 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.126 BaseBdev4_malloc 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.126 true 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.126 [2024-10-09 01:30:03.876990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:05.126 [2024-10-09 01:30:03.877088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.126 [2024-10-09 01:30:03.877123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:05.126 [2024-10-09 01:30:03.877154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.126 [2024-10-09 01:30:03.879482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.126 [2024-10-09 01:30:03.879574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:05.126 BaseBdev4 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:05.126 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.126 [2024-10-09 01:30:03.889063] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.126 [2024-10-09 01:30:03.891173] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.127 [2024-10-09 01:30:03.891281] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:05.127 [2024-10-09 01:30:03.891376] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:05.127 [2024-10-09 01:30:03.891621] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:05.127 [2024-10-09 01:30:03.891673] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:05.127 [2024-10-09 01:30:03.891948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:05.127 [2024-10-09 01:30:03.892120] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:05.127 [2024-10-09 01:30:03.892160] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:05.127 [2024-10-09 01:30:03.892331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.127 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.127 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:05.127 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:05.127 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.127 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.127 01:30:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.127 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.127 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.127 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.127 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.127 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.127 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.127 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.127 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.127 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.127 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.127 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.127 "name": "raid_bdev1", 00:10:05.127 "uuid": "95b65fb8-5477-40af-99fa-abc46b6163cc", 00:10:05.127 "strip_size_kb": 64, 00:10:05.127 "state": "online", 00:10:05.127 "raid_level": "raid0", 00:10:05.127 "superblock": true, 00:10:05.127 "num_base_bdevs": 4, 00:10:05.127 "num_base_bdevs_discovered": 4, 00:10:05.127 "num_base_bdevs_operational": 4, 00:10:05.127 "base_bdevs_list": [ 00:10:05.127 { 00:10:05.127 "name": "BaseBdev1", 00:10:05.127 "uuid": "fc7c3e7a-5401-5cb5-98b0-dbe45ded694d", 00:10:05.127 "is_configured": true, 00:10:05.127 "data_offset": 2048, 00:10:05.127 "data_size": 63488 00:10:05.127 }, 00:10:05.127 { 00:10:05.127 "name": "BaseBdev2", 00:10:05.127 "uuid": "7d91492a-3184-588b-aa02-192b785de122", 
00:10:05.127 "is_configured": true, 00:10:05.127 "data_offset": 2048, 00:10:05.127 "data_size": 63488 00:10:05.127 }, 00:10:05.127 { 00:10:05.127 "name": "BaseBdev3", 00:10:05.127 "uuid": "5f9fc561-20cb-58cb-bdf0-30e4c6f727e0", 00:10:05.127 "is_configured": true, 00:10:05.127 "data_offset": 2048, 00:10:05.127 "data_size": 63488 00:10:05.127 }, 00:10:05.127 { 00:10:05.127 "name": "BaseBdev4", 00:10:05.127 "uuid": "7767938b-6b4e-5139-aae7-37d1e435eaac", 00:10:05.127 "is_configured": true, 00:10:05.127 "data_offset": 2048, 00:10:05.127 "data_size": 63488 00:10:05.127 } 00:10:05.127 ] 00:10:05.127 }' 00:10:05.127 01:30:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.127 01:30:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.695 01:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:05.695 01:30:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:05.695 [2024-10-09 01:30:04.449670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:10:06.633 01:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:06.633 01:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.633 01:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.633 01:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.633 01:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:06.633 01:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:06.633 01:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:06.633 01:30:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:06.633 01:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:06.633 01:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.633 01:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.633 01:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.633 01:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.633 01:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.633 01:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.633 01:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.633 01:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.633 01:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.633 01:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:06.633 01:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.633 01:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.633 01:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.633 01:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.633 "name": "raid_bdev1", 00:10:06.633 "uuid": "95b65fb8-5477-40af-99fa-abc46b6163cc", 00:10:06.633 "strip_size_kb": 64, 00:10:06.633 "state": "online", 00:10:06.633 "raid_level": "raid0", 00:10:06.633 "superblock": true, 00:10:06.633 "num_base_bdevs": 4, 
00:10:06.633 "num_base_bdevs_discovered": 4, 00:10:06.633 "num_base_bdevs_operational": 4, 00:10:06.633 "base_bdevs_list": [ 00:10:06.633 { 00:10:06.633 "name": "BaseBdev1", 00:10:06.633 "uuid": "fc7c3e7a-5401-5cb5-98b0-dbe45ded694d", 00:10:06.633 "is_configured": true, 00:10:06.633 "data_offset": 2048, 00:10:06.633 "data_size": 63488 00:10:06.633 }, 00:10:06.633 { 00:10:06.633 "name": "BaseBdev2", 00:10:06.633 "uuid": "7d91492a-3184-588b-aa02-192b785de122", 00:10:06.633 "is_configured": true, 00:10:06.633 "data_offset": 2048, 00:10:06.633 "data_size": 63488 00:10:06.633 }, 00:10:06.633 { 00:10:06.633 "name": "BaseBdev3", 00:10:06.633 "uuid": "5f9fc561-20cb-58cb-bdf0-30e4c6f727e0", 00:10:06.633 "is_configured": true, 00:10:06.633 "data_offset": 2048, 00:10:06.633 "data_size": 63488 00:10:06.633 }, 00:10:06.633 { 00:10:06.633 "name": "BaseBdev4", 00:10:06.633 "uuid": "7767938b-6b4e-5139-aae7-37d1e435eaac", 00:10:06.633 "is_configured": true, 00:10:06.633 "data_offset": 2048, 00:10:06.633 "data_size": 63488 00:10:06.634 } 00:10:06.634 ] 00:10:06.634 }' 00:10:06.634 01:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.634 01:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.892 01:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:06.892 01:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.892 01:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.151 [2024-10-09 01:30:05.788675] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:07.151 [2024-10-09 01:30:05.788764] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.151 [2024-10-09 01:30:05.791286] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.151 [2024-10-09 01:30:05.791388] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.151 [2024-10-09 01:30:05.791457] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:07.151 [2024-10-09 01:30:05.791515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:07.151 { 00:10:07.151 "results": [ 00:10:07.151 { 00:10:07.151 "job": "raid_bdev1", 00:10:07.151 "core_mask": "0x1", 00:10:07.151 "workload": "randrw", 00:10:07.151 "percentage": 50, 00:10:07.151 "status": "finished", 00:10:07.151 "queue_depth": 1, 00:10:07.151 "io_size": 131072, 00:10:07.151 "runtime": 1.336911, 00:10:07.151 "iops": 14891.791600188793, 00:10:07.151 "mibps": 1861.4739500235992, 00:10:07.151 "io_failed": 1, 00:10:07.151 "io_timeout": 0, 00:10:07.151 "avg_latency_us": 94.42671504030842, 00:10:07.151 "min_latency_us": 24.321450361718817, 00:10:07.151 "max_latency_us": 1406.6277346814259 00:10:07.151 } 00:10:07.151 ], 00:10:07.151 "core_count": 1 00:10:07.151 } 00:10:07.151 01:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.151 01:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83029 00:10:07.152 01:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 83029 ']' 00:10:07.152 01:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 83029 00:10:07.152 01:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:07.152 01:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:07.152 01:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83029 00:10:07.152 01:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:07.152 01:30:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:07.152 killing process with pid 83029 00:10:07.152 01:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83029' 00:10:07.152 01:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 83029 00:10:07.152 [2024-10-09 01:30:05.832975] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:07.152 01:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 83029 00:10:07.152 [2024-10-09 01:30:05.896753] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:07.411 01:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:07.411 01:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.l3vsT0NQCa 00:10:07.411 01:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:07.411 01:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:10:07.411 01:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:07.411 01:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:07.411 01:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:07.411 01:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:10:07.411 00:10:07.411 real 0m3.521s 00:10:07.411 user 0m4.281s 00:10:07.411 sys 0m0.643s 00:10:07.411 01:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:07.412 01:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.412 ************************************ 00:10:07.412 END TEST raid_read_error_test 00:10:07.412 ************************************ 00:10:07.671 01:30:06 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test 
raid_io_error_test raid0 4 write 00:10:07.671 01:30:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:07.671 01:30:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:07.671 01:30:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:07.671 ************************************ 00:10:07.671 START TEST raid_write_error_test 00:10:07.671 ************************************ 00:10:07.671 01:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:10:07.671 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:07.671 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:07.671 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:07.671 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:07.671 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:07.671 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:07.671 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:07.671 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:07.671 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:07.671 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:07.671 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:07.671 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:07.671 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:07.671 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:10:07.671 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:07.671 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:07.671 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:07.672 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:07.672 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:07.672 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:07.672 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:07.672 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:07.672 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:07.672 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:07.672 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:07.672 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:07.672 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:07.672 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:07.672 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.y8ko94sajz 00:10:07.672 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83164 00:10:07.672 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:07.672 01:30:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # 
waitforlisten 83164 00:10:07.672 01:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 83164 ']' 00:10:07.672 01:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.672 01:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:07.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.672 01:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.672 01:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:07.672 01:30:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.672 [2024-10-09 01:30:06.451182] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:10:07.672 [2024-10-09 01:30:06.451297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83164 ] 00:10:07.931 [2024-10-09 01:30:06.581438] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:07.931 [2024-10-09 01:30:06.610551] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.931 [2024-10-09 01:30:06.679380] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.931 [2024-10-09 01:30:06.755115] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.931 [2024-10-09 01:30:06.755156] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.499 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:08.499 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:08.499 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:08.499 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:08.500 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.500 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.500 BaseBdev1_malloc 00:10:08.500 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.500 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:08.500 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.500 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.500 true 00:10:08.500 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.500 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:08.500 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.500 01:30:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.500 [2024-10-09 01:30:07.313914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:08.500 [2024-10-09 01:30:07.314017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.500 [2024-10-09 01:30:07.314052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:08.500 [2024-10-09 01:30:07.314087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.500 [2024-10-09 01:30:07.316559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.500 [2024-10-09 01:30:07.316628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:08.500 BaseBdev1 00:10:08.500 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.500 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:08.500 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:08.500 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.500 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.500 BaseBdev2_malloc 00:10:08.500 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.500 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:08.500 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.500 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.500 true 00:10:08.500 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:08.500 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:08.500 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.500 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.500 [2024-10-09 01:30:07.371836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:08.500 [2024-10-09 01:30:07.371937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.500 [2024-10-09 01:30:07.371976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:08.500 [2024-10-09 01:30:07.372011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.500 [2024-10-09 01:30:07.374345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.500 [2024-10-09 01:30:07.374421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:08.500 BaseBdev2 00:10:08.500 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.500 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:08.500 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:08.500 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.500 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.760 BaseBdev3_malloc 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.760 true 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.760 [2024-10-09 01:30:07.418420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:08.760 [2024-10-09 01:30:07.418513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.760 [2024-10-09 01:30:07.418559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:08.760 [2024-10-09 01:30:07.418591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.760 [2024-10-09 01:30:07.420909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.760 [2024-10-09 01:30:07.420980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:08.760 BaseBdev3 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.760 BaseBdev4_malloc 00:10:08.760 
01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.760 true 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.760 [2024-10-09 01:30:07.464903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:08.760 [2024-10-09 01:30:07.464996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.760 [2024-10-09 01:30:07.465030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:08.760 [2024-10-09 01:30:07.465063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.760 [2024-10-09 01:30:07.467335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.760 [2024-10-09 01:30:07.467407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:08.760 BaseBdev4 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:08.760 01:30:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.760 [2024-10-09 01:30:07.476982] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:08.760 [2024-10-09 01:30:07.479067] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:08.760 [2024-10-09 01:30:07.479178] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:08.760 [2024-10-09 01:30:07.479261] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:08.760 [2024-10-09 01:30:07.479474] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:08.760 [2024-10-09 01:30:07.479534] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:08.760 [2024-10-09 01:30:07.479786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:08.760 [2024-10-09 01:30:07.479965] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:08.760 [2024-10-09 01:30:07.480005] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:08.760 [2024-10-09 01:30:07.480174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.760 "name": "raid_bdev1", 00:10:08.760 "uuid": "8abfe6d3-bce8-4be5-8063-029411bf9e70", 00:10:08.760 "strip_size_kb": 64, 00:10:08.760 "state": "online", 00:10:08.760 "raid_level": "raid0", 00:10:08.760 "superblock": true, 00:10:08.760 "num_base_bdevs": 4, 00:10:08.760 "num_base_bdevs_discovered": 4, 00:10:08.760 "num_base_bdevs_operational": 4, 00:10:08.760 "base_bdevs_list": [ 00:10:08.760 { 00:10:08.760 "name": "BaseBdev1", 00:10:08.760 "uuid": "e1bcf48f-5114-5a81-a59f-0a5540e8d325", 00:10:08.760 "is_configured": true, 00:10:08.760 "data_offset": 2048, 00:10:08.760 "data_size": 63488 00:10:08.760 }, 00:10:08.760 { 00:10:08.760 
"name": "BaseBdev2", 00:10:08.760 "uuid": "6d3cce18-0e08-55e6-b145-9b767281cca5", 00:10:08.760 "is_configured": true, 00:10:08.760 "data_offset": 2048, 00:10:08.760 "data_size": 63488 00:10:08.760 }, 00:10:08.760 { 00:10:08.760 "name": "BaseBdev3", 00:10:08.760 "uuid": "436c9668-6d25-583e-b32f-307a455ac20e", 00:10:08.760 "is_configured": true, 00:10:08.760 "data_offset": 2048, 00:10:08.760 "data_size": 63488 00:10:08.760 }, 00:10:08.760 { 00:10:08.760 "name": "BaseBdev4", 00:10:08.760 "uuid": "32ee1662-a6e9-5906-91b0-22598c87523f", 00:10:08.760 "is_configured": true, 00:10:08.760 "data_offset": 2048, 00:10:08.760 "data_size": 63488 00:10:08.760 } 00:10:08.760 ] 00:10:08.760 }' 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.760 01:30:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.020 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:09.020 01:30:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:09.280 [2024-10-09 01:30:07.997540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:10:10.220 01:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:10.220 01:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.220 01:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.220 01:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.220 01:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:10.220 01:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:10.220 01:30:08 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:10.220 01:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:10.220 01:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:10.220 01:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.220 01:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.220 01:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.220 01:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.220 01:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.220 01:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.220 01:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.220 01:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.220 01:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.220 01:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.220 01:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.220 01:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.220 01:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.220 01:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.220 "name": "raid_bdev1", 00:10:10.220 "uuid": "8abfe6d3-bce8-4be5-8063-029411bf9e70", 00:10:10.220 "strip_size_kb": 64, 00:10:10.220 "state": "online", 
00:10:10.220 "raid_level": "raid0", 00:10:10.220 "superblock": true, 00:10:10.220 "num_base_bdevs": 4, 00:10:10.220 "num_base_bdevs_discovered": 4, 00:10:10.220 "num_base_bdevs_operational": 4, 00:10:10.220 "base_bdevs_list": [ 00:10:10.220 { 00:10:10.220 "name": "BaseBdev1", 00:10:10.220 "uuid": "e1bcf48f-5114-5a81-a59f-0a5540e8d325", 00:10:10.220 "is_configured": true, 00:10:10.220 "data_offset": 2048, 00:10:10.220 "data_size": 63488 00:10:10.220 }, 00:10:10.220 { 00:10:10.220 "name": "BaseBdev2", 00:10:10.220 "uuid": "6d3cce18-0e08-55e6-b145-9b767281cca5", 00:10:10.220 "is_configured": true, 00:10:10.220 "data_offset": 2048, 00:10:10.220 "data_size": 63488 00:10:10.220 }, 00:10:10.220 { 00:10:10.220 "name": "BaseBdev3", 00:10:10.220 "uuid": "436c9668-6d25-583e-b32f-307a455ac20e", 00:10:10.220 "is_configured": true, 00:10:10.220 "data_offset": 2048, 00:10:10.220 "data_size": 63488 00:10:10.220 }, 00:10:10.220 { 00:10:10.220 "name": "BaseBdev4", 00:10:10.220 "uuid": "32ee1662-a6e9-5906-91b0-22598c87523f", 00:10:10.220 "is_configured": true, 00:10:10.220 "data_offset": 2048, 00:10:10.220 "data_size": 63488 00:10:10.220 } 00:10:10.220 ] 00:10:10.220 }' 00:10:10.220 01:30:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.220 01:30:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.480 01:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:10.480 01:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.480 01:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.480 [2024-10-09 01:30:09.348280] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:10.480 [2024-10-09 01:30:09.348367] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:10.480 [2024-10-09 01:30:09.350898] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.480 [2024-10-09 01:30:09.351005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.480 [2024-10-09 01:30:09.351073] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.480 [2024-10-09 01:30:09.351151] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:10.480 { 00:10:10.480 "results": [ 00:10:10.480 { 00:10:10.480 "job": "raid_bdev1", 00:10:10.480 "core_mask": "0x1", 00:10:10.480 "workload": "randrw", 00:10:10.480 "percentage": 50, 00:10:10.480 "status": "finished", 00:10:10.480 "queue_depth": 1, 00:10:10.480 "io_size": 131072, 00:10:10.480 "runtime": 1.34879, 00:10:10.480 "iops": 14814.759895906702, 00:10:10.480 "mibps": 1851.8449869883377, 00:10:10.480 "io_failed": 1, 00:10:10.480 "io_timeout": 0, 00:10:10.480 "avg_latency_us": 94.87953054352279, 00:10:10.480 "min_latency_us": 24.098317789592958, 00:10:10.480 "max_latency_us": 1370.9265231412883 00:10:10.480 } 00:10:10.480 ], 00:10:10.480 "core_count": 1 00:10:10.480 } 00:10:10.480 01:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.480 01:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83164 00:10:10.480 01:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 83164 ']' 00:10:10.480 01:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 83164 00:10:10.480 01:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:10.480 01:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:10.480 01:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83164 00:10:10.740 01:30:09 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:10.740 01:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:10.740 killing process with pid 83164 00:10:10.740 01:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83164' 00:10:10.740 01:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 83164 00:10:10.740 [2024-10-09 01:30:09.398229] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:10.740 01:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 83164 00:10:10.740 [2024-10-09 01:30:09.460740] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:11.001 01:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.y8ko94sajz 00:10:11.001 01:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:11.001 01:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:11.001 01:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:11.001 01:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:11.001 ************************************ 00:10:11.001 END TEST raid_write_error_test 00:10:11.001 ************************************ 00:10:11.001 01:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:11.001 01:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:11.001 01:30:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:11.001 00:10:11.001 real 0m3.490s 00:10:11.001 user 0m4.214s 00:10:11.001 sys 0m0.651s 00:10:11.001 01:30:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:11.001 01:30:09 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:11.261 01:30:09 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:11.261 01:30:09 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:11.261 01:30:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:11.261 01:30:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.261 01:30:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:11.261 ************************************ 00:10:11.261 START TEST raid_state_function_test 00:10:11.261 ************************************ 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:11.261 01:30:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:11.261 Process raid 
pid: 83296 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83296 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83296' 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83296 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 83296 ']' 00:10:11.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:11.261 01:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.261 [2024-10-09 01:30:10.013692] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:10:11.261 [2024-10-09 01:30:10.013821] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.261 [2024-10-09 01:30:10.147104] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:11.521 [2024-10-09 01:30:10.177150] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.521 [2024-10-09 01:30:10.247024] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.521 [2024-10-09 01:30:10.322677] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.521 [2024-10-09 01:30:10.322718] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.091 01:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:12.091 01:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:12.091 01:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:12.091 01:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.091 01:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.091 [2024-10-09 01:30:10.850791] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:12.092 [2024-10-09 01:30:10.850899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:12.092 [2024-10-09 01:30:10.850932] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:12.092 [2024-10-09 01:30:10.850954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:12.092 [2024-10-09 01:30:10.850978] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:12.092 [2024-10-09 01:30:10.850997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:12.092 [2024-10-09 01:30:10.851017] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:12.092 
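The odd-looking `-b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\'''` in the trace is an artifact of `set -x`: xtrace re-quotes a single-quoted argument when printing it. The command actually issued passes all four base bdev names as one space-separated argument. A small stand-in demonstrates the quoting (`rpc_cmd` here is a stub for illustration only; the real one talks to the SPDK target over `/var/tmp/spdk.sock`):

```shell
# Stub standing in for SPDK's rpc_cmd, which needs a running bdev_svc
# target. Printing each argument on its own line shows how the shell
# splits the command.
rpc_cmd() { printf '%s\n' "$@"; }

# The value of -b is a single argument, exactly as in the trace above.
rpc_cmd bdev_raid_create -z 64 -r concat \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
```

Running the stub prints nine lines, with the four bdev names together on one line, confirming that the quoting in the xtrace output collapses to a single argument.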
[2024-10-09 01:30:10.851058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:12.092 01:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.092 01:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:12.092 01:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.092 01:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.092 01:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.092 01:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.092 01:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.092 01:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.092 01:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.092 01:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.092 01:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.092 01:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.092 01:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.092 01:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.092 01:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.092 01:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.092 01:30:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.092 "name": "Existed_Raid", 00:10:12.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.092 "strip_size_kb": 64, 00:10:12.092 "state": "configuring", 00:10:12.092 "raid_level": "concat", 00:10:12.092 "superblock": false, 00:10:12.092 "num_base_bdevs": 4, 00:10:12.092 "num_base_bdevs_discovered": 0, 00:10:12.092 "num_base_bdevs_operational": 4, 00:10:12.092 "base_bdevs_list": [ 00:10:12.092 { 00:10:12.092 "name": "BaseBdev1", 00:10:12.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.092 "is_configured": false, 00:10:12.092 "data_offset": 0, 00:10:12.092 "data_size": 0 00:10:12.092 }, 00:10:12.092 { 00:10:12.092 "name": "BaseBdev2", 00:10:12.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.092 "is_configured": false, 00:10:12.092 "data_offset": 0, 00:10:12.092 "data_size": 0 00:10:12.092 }, 00:10:12.092 { 00:10:12.092 "name": "BaseBdev3", 00:10:12.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.092 "is_configured": false, 00:10:12.092 "data_offset": 0, 00:10:12.092 "data_size": 0 00:10:12.092 }, 00:10:12.092 { 00:10:12.092 "name": "BaseBdev4", 00:10:12.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.092 "is_configured": false, 00:10:12.092 "data_offset": 0, 00:10:12.092 "data_size": 0 00:10:12.092 } 00:10:12.092 ] 00:10:12.092 }' 00:10:12.092 01:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.092 01:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.661 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:12.661 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.661 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.661 [2024-10-09 01:30:11.302803] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:12.661 [2024-10-09 01:30:11.302884] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:12.661 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.661 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:12.661 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.661 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.661 [2024-10-09 01:30:11.314807] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:12.661 [2024-10-09 01:30:11.314879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:12.661 [2024-10-09 01:30:11.314907] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:12.661 [2024-10-09 01:30:11.314927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:12.661 [2024-10-09 01:30:11.314947] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:12.662 [2024-10-09 01:30:11.314966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:12.662 [2024-10-09 01:30:11.314985] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:12.662 [2024-10-09 01:30:11.315003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev1 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.662 [2024-10-09 01:30:11.341914] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:12.662 BaseBdev1 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.662 [ 00:10:12.662 { 
00:10:12.662 "name": "BaseBdev1", 00:10:12.662 "aliases": [ 00:10:12.662 "ae7bf1d9-3ac6-4aee-8729-0bb6fa59be3c" 00:10:12.662 ], 00:10:12.662 "product_name": "Malloc disk", 00:10:12.662 "block_size": 512, 00:10:12.662 "num_blocks": 65536, 00:10:12.662 "uuid": "ae7bf1d9-3ac6-4aee-8729-0bb6fa59be3c", 00:10:12.662 "assigned_rate_limits": { 00:10:12.662 "rw_ios_per_sec": 0, 00:10:12.662 "rw_mbytes_per_sec": 0, 00:10:12.662 "r_mbytes_per_sec": 0, 00:10:12.662 "w_mbytes_per_sec": 0 00:10:12.662 }, 00:10:12.662 "claimed": true, 00:10:12.662 "claim_type": "exclusive_write", 00:10:12.662 "zoned": false, 00:10:12.662 "supported_io_types": { 00:10:12.662 "read": true, 00:10:12.662 "write": true, 00:10:12.662 "unmap": true, 00:10:12.662 "flush": true, 00:10:12.662 "reset": true, 00:10:12.662 "nvme_admin": false, 00:10:12.662 "nvme_io": false, 00:10:12.662 "nvme_io_md": false, 00:10:12.662 "write_zeroes": true, 00:10:12.662 "zcopy": true, 00:10:12.662 "get_zone_info": false, 00:10:12.662 "zone_management": false, 00:10:12.662 "zone_append": false, 00:10:12.662 "compare": false, 00:10:12.662 "compare_and_write": false, 00:10:12.662 "abort": true, 00:10:12.662 "seek_hole": false, 00:10:12.662 "seek_data": false, 00:10:12.662 "copy": true, 00:10:12.662 "nvme_iov_md": false 00:10:12.662 }, 00:10:12.662 "memory_domains": [ 00:10:12.662 { 00:10:12.662 "dma_device_id": "system", 00:10:12.662 "dma_device_type": 1 00:10:12.662 }, 00:10:12.662 { 00:10:12.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.662 "dma_device_type": 2 00:10:12.662 } 00:10:12.662 ], 00:10:12.662 "driver_specific": {} 00:10:12.662 } 00:10:12.662 ] 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
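Each `verify_raid_bdev_state Existed_Raid configuring concat 64 4` call in this trace reduces to comparing fields of the JSON that `bdev_raid_get_bdevs all` returns (filtered through `jq`, as shown above). A self-contained sketch of those field checks, run against a trimmed copy of the dump; the real helper uses `jq` on live RPC output, while plain `grep` is used here so the sketch needs no running target:

```shell
# Trimmed copy of the raid_bdev_info dump from the trace above.
raid_bdev_info='{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "concat",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 4
}'

# verify_field <key> <expected value>: succeed iff the dump holds the
# expected key/value pair. Illustrative stand-in for the jq-based check.
verify_field() {
  grep -q "\"$1\": $2" <<<"$raid_bdev_info"
}

verify_field state '"configuring"'
verify_field raid_level '"concat"'
verify_field strip_size_kb 64
verify_field num_base_bdevs_operational 4
echo OK
```

After each `bdev_malloc_create` the test re-runs the same comparison, expecting only `num_base_bdevs_discovered` to tick up while the state stays `configuring` until all four base bdevs are attached.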
00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.662 "name": "Existed_Raid", 00:10:12.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.662 "strip_size_kb": 64, 00:10:12.662 "state": "configuring", 00:10:12.662 "raid_level": "concat", 00:10:12.662 "superblock": false, 00:10:12.662 "num_base_bdevs": 4, 00:10:12.662 
"num_base_bdevs_discovered": 1, 00:10:12.662 "num_base_bdevs_operational": 4, 00:10:12.662 "base_bdevs_list": [ 00:10:12.662 { 00:10:12.662 "name": "BaseBdev1", 00:10:12.662 "uuid": "ae7bf1d9-3ac6-4aee-8729-0bb6fa59be3c", 00:10:12.662 "is_configured": true, 00:10:12.662 "data_offset": 0, 00:10:12.662 "data_size": 65536 00:10:12.662 }, 00:10:12.662 { 00:10:12.662 "name": "BaseBdev2", 00:10:12.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.662 "is_configured": false, 00:10:12.662 "data_offset": 0, 00:10:12.662 "data_size": 0 00:10:12.662 }, 00:10:12.662 { 00:10:12.662 "name": "BaseBdev3", 00:10:12.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.662 "is_configured": false, 00:10:12.662 "data_offset": 0, 00:10:12.662 "data_size": 0 00:10:12.662 }, 00:10:12.662 { 00:10:12.662 "name": "BaseBdev4", 00:10:12.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.662 "is_configured": false, 00:10:12.662 "data_offset": 0, 00:10:12.662 "data_size": 0 00:10:12.662 } 00:10:12.662 ] 00:10:12.662 }' 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.662 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.922 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:12.922 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.922 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.922 [2024-10-09 01:30:11.790064] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:12.922 [2024-10-09 01:30:11.790162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:12.922 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.922 01:30:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:12.923 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.923 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.923 [2024-10-09 01:30:11.802086] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:12.923 [2024-10-09 01:30:11.804282] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:12.923 [2024-10-09 01:30:11.804353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:12.923 [2024-10-09 01:30:11.804384] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:12.923 [2024-10-09 01:30:11.804411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:12.923 [2024-10-09 01:30:11.804431] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:12.923 [2024-10-09 01:30:11.804449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:12.923 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.923 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:12.923 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:12.923 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:12.923 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.923 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:12.923 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.923 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.923 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.923 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.923 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.923 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.923 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.182 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.183 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.183 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.183 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.183 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.183 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.183 "name": "Existed_Raid", 00:10:13.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.183 "strip_size_kb": 64, 00:10:13.183 "state": "configuring", 00:10:13.183 "raid_level": "concat", 00:10:13.183 "superblock": false, 00:10:13.183 "num_base_bdevs": 4, 00:10:13.183 "num_base_bdevs_discovered": 1, 00:10:13.183 "num_base_bdevs_operational": 4, 00:10:13.183 "base_bdevs_list": [ 00:10:13.183 { 00:10:13.183 "name": "BaseBdev1", 00:10:13.183 "uuid": "ae7bf1d9-3ac6-4aee-8729-0bb6fa59be3c", 00:10:13.183 
"is_configured": true, 00:10:13.183 "data_offset": 0, 00:10:13.183 "data_size": 65536 00:10:13.183 }, 00:10:13.183 { 00:10:13.183 "name": "BaseBdev2", 00:10:13.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.183 "is_configured": false, 00:10:13.183 "data_offset": 0, 00:10:13.183 "data_size": 0 00:10:13.183 }, 00:10:13.183 { 00:10:13.183 "name": "BaseBdev3", 00:10:13.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.183 "is_configured": false, 00:10:13.183 "data_offset": 0, 00:10:13.183 "data_size": 0 00:10:13.183 }, 00:10:13.183 { 00:10:13.183 "name": "BaseBdev4", 00:10:13.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.183 "is_configured": false, 00:10:13.183 "data_offset": 0, 00:10:13.183 "data_size": 0 00:10:13.183 } 00:10:13.183 ] 00:10:13.183 }' 00:10:13.183 01:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.183 01:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.443 [2024-10-09 01:30:12.223574] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:13.443 BaseBdev2 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:13.443 01:30:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.443 [ 00:10:13.443 { 00:10:13.443 "name": "BaseBdev2", 00:10:13.443 "aliases": [ 00:10:13.443 "75705dec-fe86-4ef4-b702-f245cc2d36a6" 00:10:13.443 ], 00:10:13.443 "product_name": "Malloc disk", 00:10:13.443 "block_size": 512, 00:10:13.443 "num_blocks": 65536, 00:10:13.443 "uuid": "75705dec-fe86-4ef4-b702-f245cc2d36a6", 00:10:13.443 "assigned_rate_limits": { 00:10:13.443 "rw_ios_per_sec": 0, 00:10:13.443 "rw_mbytes_per_sec": 0, 00:10:13.443 "r_mbytes_per_sec": 0, 00:10:13.443 "w_mbytes_per_sec": 0 00:10:13.443 }, 00:10:13.443 "claimed": true, 00:10:13.443 "claim_type": "exclusive_write", 00:10:13.443 "zoned": false, 00:10:13.443 "supported_io_types": { 00:10:13.443 "read": true, 00:10:13.443 "write": true, 00:10:13.443 "unmap": true, 00:10:13.443 "flush": true, 00:10:13.443 "reset": true, 00:10:13.443 "nvme_admin": false, 00:10:13.443 "nvme_io": false, 00:10:13.443 "nvme_io_md": 
false, 00:10:13.443 "write_zeroes": true, 00:10:13.443 "zcopy": true, 00:10:13.443 "get_zone_info": false, 00:10:13.443 "zone_management": false, 00:10:13.443 "zone_append": false, 00:10:13.443 "compare": false, 00:10:13.443 "compare_and_write": false, 00:10:13.443 "abort": true, 00:10:13.443 "seek_hole": false, 00:10:13.443 "seek_data": false, 00:10:13.443 "copy": true, 00:10:13.443 "nvme_iov_md": false 00:10:13.443 }, 00:10:13.443 "memory_domains": [ 00:10:13.443 { 00:10:13.443 "dma_device_id": "system", 00:10:13.443 "dma_device_type": 1 00:10:13.443 }, 00:10:13.443 { 00:10:13.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.443 "dma_device_type": 2 00:10:13.443 } 00:10:13.443 ], 00:10:13.443 "driver_specific": {} 00:10:13.443 } 00:10:13.443 ] 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.443 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.443 "name": "Existed_Raid", 00:10:13.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.443 "strip_size_kb": 64, 00:10:13.443 "state": "configuring", 00:10:13.443 "raid_level": "concat", 00:10:13.443 "superblock": false, 00:10:13.443 "num_base_bdevs": 4, 00:10:13.443 "num_base_bdevs_discovered": 2, 00:10:13.443 "num_base_bdevs_operational": 4, 00:10:13.443 "base_bdevs_list": [ 00:10:13.443 { 00:10:13.443 "name": "BaseBdev1", 00:10:13.443 "uuid": "ae7bf1d9-3ac6-4aee-8729-0bb6fa59be3c", 00:10:13.443 "is_configured": true, 00:10:13.443 "data_offset": 0, 00:10:13.443 "data_size": 65536 00:10:13.443 }, 00:10:13.443 { 00:10:13.443 "name": "BaseBdev2", 00:10:13.443 "uuid": "75705dec-fe86-4ef4-b702-f245cc2d36a6", 00:10:13.444 "is_configured": true, 00:10:13.444 "data_offset": 0, 00:10:13.444 "data_size": 65536 00:10:13.444 }, 00:10:13.444 { 00:10:13.444 "name": "BaseBdev3", 00:10:13.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.444 
"is_configured": false, 00:10:13.444 "data_offset": 0, 00:10:13.444 "data_size": 0 00:10:13.444 }, 00:10:13.444 { 00:10:13.444 "name": "BaseBdev4", 00:10:13.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.444 "is_configured": false, 00:10:13.444 "data_offset": 0, 00:10:13.444 "data_size": 0 00:10:13.444 } 00:10:13.444 ] 00:10:13.444 }' 00:10:13.444 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.444 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.013 [2024-10-09 01:30:12.684301] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:14.013 BaseBdev3 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:14.013 01:30:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.013 [ 00:10:14.013 { 00:10:14.013 "name": "BaseBdev3", 00:10:14.013 "aliases": [ 00:10:14.013 "52b9b0ba-1701-49db-97bd-b680e478a90d" 00:10:14.013 ], 00:10:14.013 "product_name": "Malloc disk", 00:10:14.013 "block_size": 512, 00:10:14.013 "num_blocks": 65536, 00:10:14.013 "uuid": "52b9b0ba-1701-49db-97bd-b680e478a90d", 00:10:14.013 "assigned_rate_limits": { 00:10:14.013 "rw_ios_per_sec": 0, 00:10:14.013 "rw_mbytes_per_sec": 0, 00:10:14.013 "r_mbytes_per_sec": 0, 00:10:14.013 "w_mbytes_per_sec": 0 00:10:14.013 }, 00:10:14.013 "claimed": true, 00:10:14.013 "claim_type": "exclusive_write", 00:10:14.013 "zoned": false, 00:10:14.013 "supported_io_types": { 00:10:14.013 "read": true, 00:10:14.013 "write": true, 00:10:14.013 "unmap": true, 00:10:14.013 "flush": true, 00:10:14.013 "reset": true, 00:10:14.013 "nvme_admin": false, 00:10:14.013 "nvme_io": false, 00:10:14.013 "nvme_io_md": false, 00:10:14.013 "write_zeroes": true, 00:10:14.013 "zcopy": true, 00:10:14.013 "get_zone_info": false, 00:10:14.013 "zone_management": false, 00:10:14.013 "zone_append": false, 00:10:14.013 "compare": false, 00:10:14.013 "compare_and_write": false, 00:10:14.013 "abort": true, 00:10:14.013 "seek_hole": false, 00:10:14.013 "seek_data": false, 00:10:14.013 "copy": true, 00:10:14.013 "nvme_iov_md": false 00:10:14.013 }, 00:10:14.013 
"memory_domains": [ 00:10:14.013 { 00:10:14.013 "dma_device_id": "system", 00:10:14.013 "dma_device_type": 1 00:10:14.013 }, 00:10:14.013 { 00:10:14.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.013 "dma_device_type": 2 00:10:14.013 } 00:10:14.013 ], 00:10:14.013 "driver_specific": {} 00:10:14.013 } 00:10:14.013 ] 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.013 "name": "Existed_Raid", 00:10:14.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.013 "strip_size_kb": 64, 00:10:14.013 "state": "configuring", 00:10:14.013 "raid_level": "concat", 00:10:14.013 "superblock": false, 00:10:14.013 "num_base_bdevs": 4, 00:10:14.013 "num_base_bdevs_discovered": 3, 00:10:14.013 "num_base_bdevs_operational": 4, 00:10:14.013 "base_bdevs_list": [ 00:10:14.013 { 00:10:14.013 "name": "BaseBdev1", 00:10:14.013 "uuid": "ae7bf1d9-3ac6-4aee-8729-0bb6fa59be3c", 00:10:14.013 "is_configured": true, 00:10:14.013 "data_offset": 0, 00:10:14.013 "data_size": 65536 00:10:14.013 }, 00:10:14.013 { 00:10:14.013 "name": "BaseBdev2", 00:10:14.013 "uuid": "75705dec-fe86-4ef4-b702-f245cc2d36a6", 00:10:14.013 "is_configured": true, 00:10:14.013 "data_offset": 0, 00:10:14.013 "data_size": 65536 00:10:14.013 }, 00:10:14.013 { 00:10:14.013 "name": "BaseBdev3", 00:10:14.013 "uuid": "52b9b0ba-1701-49db-97bd-b680e478a90d", 00:10:14.013 "is_configured": true, 00:10:14.013 "data_offset": 0, 00:10:14.013 "data_size": 65536 00:10:14.013 }, 00:10:14.013 { 00:10:14.013 "name": "BaseBdev4", 00:10:14.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.013 "is_configured": false, 00:10:14.013 "data_offset": 0, 00:10:14.013 "data_size": 0 00:10:14.013 } 00:10:14.013 ] 00:10:14.013 }' 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:10:14.013 01:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.583 [2024-10-09 01:30:13.185270] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:14.583 [2024-10-09 01:30:13.185392] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:14.583 [2024-10-09 01:30:13.185425] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:14.583 [2024-10-09 01:30:13.185858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:14.583 [2024-10-09 01:30:13.186080] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:14.583 [2024-10-09 01:30:13.186121] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:14.583 [2024-10-09 01:30:13.186406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:14.583 BaseBdev4 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:14.583 01:30:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.583 [ 00:10:14.583 { 00:10:14.583 "name": "BaseBdev4", 00:10:14.583 "aliases": [ 00:10:14.583 "6fb4e8b6-1ce9-4bf8-822c-d294a44280af" 00:10:14.583 ], 00:10:14.583 "product_name": "Malloc disk", 00:10:14.583 "block_size": 512, 00:10:14.583 "num_blocks": 65536, 00:10:14.583 "uuid": "6fb4e8b6-1ce9-4bf8-822c-d294a44280af", 00:10:14.583 "assigned_rate_limits": { 00:10:14.583 "rw_ios_per_sec": 0, 00:10:14.583 "rw_mbytes_per_sec": 0, 00:10:14.583 "r_mbytes_per_sec": 0, 00:10:14.583 "w_mbytes_per_sec": 0 00:10:14.583 }, 00:10:14.583 "claimed": true, 00:10:14.583 "claim_type": "exclusive_write", 00:10:14.583 "zoned": false, 00:10:14.583 "supported_io_types": { 00:10:14.583 "read": true, 00:10:14.583 "write": true, 00:10:14.583 "unmap": true, 00:10:14.583 "flush": true, 00:10:14.583 "reset": true, 00:10:14.583 "nvme_admin": false, 00:10:14.583 "nvme_io": false, 00:10:14.583 "nvme_io_md": false, 00:10:14.583 "write_zeroes": true, 00:10:14.583 "zcopy": true, 00:10:14.583 "get_zone_info": false, 
00:10:14.583 "zone_management": false, 00:10:14.583 "zone_append": false, 00:10:14.583 "compare": false, 00:10:14.583 "compare_and_write": false, 00:10:14.583 "abort": true, 00:10:14.583 "seek_hole": false, 00:10:14.583 "seek_data": false, 00:10:14.583 "copy": true, 00:10:14.583 "nvme_iov_md": false 00:10:14.583 }, 00:10:14.583 "memory_domains": [ 00:10:14.583 { 00:10:14.583 "dma_device_id": "system", 00:10:14.583 "dma_device_type": 1 00:10:14.583 }, 00:10:14.583 { 00:10:14.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.583 "dma_device_type": 2 00:10:14.583 } 00:10:14.583 ], 00:10:14.583 "driver_specific": {} 00:10:14.583 } 00:10:14.583 ] 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.583 "name": "Existed_Raid", 00:10:14.583 "uuid": "6643b89e-a4c9-4be8-a498-830891139925", 00:10:14.583 "strip_size_kb": 64, 00:10:14.583 "state": "online", 00:10:14.583 "raid_level": "concat", 00:10:14.583 "superblock": false, 00:10:14.583 "num_base_bdevs": 4, 00:10:14.583 "num_base_bdevs_discovered": 4, 00:10:14.583 "num_base_bdevs_operational": 4, 00:10:14.583 "base_bdevs_list": [ 00:10:14.583 { 00:10:14.583 "name": "BaseBdev1", 00:10:14.583 "uuid": "ae7bf1d9-3ac6-4aee-8729-0bb6fa59be3c", 00:10:14.583 "is_configured": true, 00:10:14.583 "data_offset": 0, 00:10:14.583 "data_size": 65536 00:10:14.583 }, 00:10:14.583 { 00:10:14.583 "name": "BaseBdev2", 00:10:14.583 "uuid": "75705dec-fe86-4ef4-b702-f245cc2d36a6", 00:10:14.583 "is_configured": true, 00:10:14.583 "data_offset": 0, 00:10:14.583 "data_size": 65536 00:10:14.583 }, 00:10:14.583 { 00:10:14.583 "name": "BaseBdev3", 00:10:14.583 "uuid": "52b9b0ba-1701-49db-97bd-b680e478a90d", 00:10:14.583 "is_configured": true, 00:10:14.583 "data_offset": 0, 00:10:14.583 "data_size": 65536 00:10:14.583 }, 00:10:14.583 { 
00:10:14.583 "name": "BaseBdev4", 00:10:14.583 "uuid": "6fb4e8b6-1ce9-4bf8-822c-d294a44280af", 00:10:14.583 "is_configured": true, 00:10:14.583 "data_offset": 0, 00:10:14.583 "data_size": 65536 00:10:14.583 } 00:10:14.583 ] 00:10:14.583 }' 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.583 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.843 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:14.843 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:14.843 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:14.843 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:14.843 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:14.843 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:14.843 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:14.843 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:14.843 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.843 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.843 [2024-10-09 01:30:13.681745] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:14.843 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.843 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:14.843 "name": "Existed_Raid", 00:10:14.843 "aliases": [ 00:10:14.843 
"6643b89e-a4c9-4be8-a498-830891139925" 00:10:14.843 ], 00:10:14.843 "product_name": "Raid Volume", 00:10:14.843 "block_size": 512, 00:10:14.843 "num_blocks": 262144, 00:10:14.843 "uuid": "6643b89e-a4c9-4be8-a498-830891139925", 00:10:14.843 "assigned_rate_limits": { 00:10:14.843 "rw_ios_per_sec": 0, 00:10:14.843 "rw_mbytes_per_sec": 0, 00:10:14.843 "r_mbytes_per_sec": 0, 00:10:14.843 "w_mbytes_per_sec": 0 00:10:14.843 }, 00:10:14.843 "claimed": false, 00:10:14.843 "zoned": false, 00:10:14.843 "supported_io_types": { 00:10:14.843 "read": true, 00:10:14.843 "write": true, 00:10:14.843 "unmap": true, 00:10:14.843 "flush": true, 00:10:14.843 "reset": true, 00:10:14.843 "nvme_admin": false, 00:10:14.843 "nvme_io": false, 00:10:14.843 "nvme_io_md": false, 00:10:14.843 "write_zeroes": true, 00:10:14.843 "zcopy": false, 00:10:14.843 "get_zone_info": false, 00:10:14.843 "zone_management": false, 00:10:14.843 "zone_append": false, 00:10:14.843 "compare": false, 00:10:14.843 "compare_and_write": false, 00:10:14.843 "abort": false, 00:10:14.843 "seek_hole": false, 00:10:14.843 "seek_data": false, 00:10:14.843 "copy": false, 00:10:14.843 "nvme_iov_md": false 00:10:14.843 }, 00:10:14.843 "memory_domains": [ 00:10:14.843 { 00:10:14.843 "dma_device_id": "system", 00:10:14.843 "dma_device_type": 1 00:10:14.843 }, 00:10:14.843 { 00:10:14.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.843 "dma_device_type": 2 00:10:14.843 }, 00:10:14.843 { 00:10:14.843 "dma_device_id": "system", 00:10:14.843 "dma_device_type": 1 00:10:14.843 }, 00:10:14.843 { 00:10:14.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.843 "dma_device_type": 2 00:10:14.843 }, 00:10:14.843 { 00:10:14.843 "dma_device_id": "system", 00:10:14.843 "dma_device_type": 1 00:10:14.843 }, 00:10:14.843 { 00:10:14.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.843 "dma_device_type": 2 00:10:14.843 }, 00:10:14.843 { 00:10:14.843 "dma_device_id": "system", 00:10:14.843 "dma_device_type": 1 00:10:14.843 }, 
00:10:14.843 { 00:10:14.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.843 "dma_device_type": 2 00:10:14.843 } 00:10:14.843 ], 00:10:14.843 "driver_specific": { 00:10:14.843 "raid": { 00:10:14.843 "uuid": "6643b89e-a4c9-4be8-a498-830891139925", 00:10:14.843 "strip_size_kb": 64, 00:10:14.843 "state": "online", 00:10:14.843 "raid_level": "concat", 00:10:14.843 "superblock": false, 00:10:14.843 "num_base_bdevs": 4, 00:10:14.843 "num_base_bdevs_discovered": 4, 00:10:14.843 "num_base_bdevs_operational": 4, 00:10:14.843 "base_bdevs_list": [ 00:10:14.843 { 00:10:14.843 "name": "BaseBdev1", 00:10:14.843 "uuid": "ae7bf1d9-3ac6-4aee-8729-0bb6fa59be3c", 00:10:14.843 "is_configured": true, 00:10:14.843 "data_offset": 0, 00:10:14.843 "data_size": 65536 00:10:14.844 }, 00:10:14.844 { 00:10:14.844 "name": "BaseBdev2", 00:10:14.844 "uuid": "75705dec-fe86-4ef4-b702-f245cc2d36a6", 00:10:14.844 "is_configured": true, 00:10:14.844 "data_offset": 0, 00:10:14.844 "data_size": 65536 00:10:14.844 }, 00:10:14.844 { 00:10:14.844 "name": "BaseBdev3", 00:10:14.844 "uuid": "52b9b0ba-1701-49db-97bd-b680e478a90d", 00:10:14.844 "is_configured": true, 00:10:14.844 "data_offset": 0, 00:10:14.844 "data_size": 65536 00:10:14.844 }, 00:10:14.844 { 00:10:14.844 "name": "BaseBdev4", 00:10:14.844 "uuid": "6fb4e8b6-1ce9-4bf8-822c-d294a44280af", 00:10:14.844 "is_configured": true, 00:10:14.844 "data_offset": 0, 00:10:14.844 "data_size": 65536 00:10:14.844 } 00:10:14.844 ] 00:10:14.844 } 00:10:14.844 } 00:10:14.844 }' 00:10:14.844 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:15.103 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:15.103 BaseBdev2 00:10:15.103 BaseBdev3 00:10:15.103 BaseBdev4' 00:10:15.103 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:10:15.103 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:15.103 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.103 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.104 01:30:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd 
bdev_malloc_delete BaseBdev1 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.364 [2024-10-09 01:30:14.009588] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:15.364 [2024-10-09 01:30:14.009659] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:15.364 [2024-10-09 01:30:14.009725] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.364 "name": "Existed_Raid", 00:10:15.364 "uuid": "6643b89e-a4c9-4be8-a498-830891139925", 00:10:15.364 "strip_size_kb": 64, 00:10:15.364 "state": "offline", 00:10:15.364 "raid_level": "concat", 00:10:15.364 "superblock": false, 00:10:15.364 "num_base_bdevs": 4, 00:10:15.364 "num_base_bdevs_discovered": 3, 00:10:15.364 "num_base_bdevs_operational": 3, 00:10:15.364 "base_bdevs_list": [ 00:10:15.364 { 00:10:15.364 "name": null, 00:10:15.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.364 "is_configured": false, 00:10:15.364 "data_offset": 0, 00:10:15.364 "data_size": 65536 00:10:15.364 }, 00:10:15.364 { 00:10:15.364 "name": "BaseBdev2", 00:10:15.364 "uuid": "75705dec-fe86-4ef4-b702-f245cc2d36a6", 00:10:15.364 "is_configured": true, 00:10:15.364 "data_offset": 0, 00:10:15.364 "data_size": 65536 00:10:15.364 }, 00:10:15.364 { 00:10:15.364 "name": "BaseBdev3", 00:10:15.364 "uuid": "52b9b0ba-1701-49db-97bd-b680e478a90d", 
00:10:15.364 "is_configured": true, 00:10:15.364 "data_offset": 0, 00:10:15.364 "data_size": 65536 00:10:15.364 }, 00:10:15.364 { 00:10:15.364 "name": "BaseBdev4", 00:10:15.364 "uuid": "6fb4e8b6-1ce9-4bf8-822c-d294a44280af", 00:10:15.364 "is_configured": true, 00:10:15.364 "data_offset": 0, 00:10:15.364 "data_size": 65536 00:10:15.364 } 00:10:15.364 ] 00:10:15.364 }' 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.364 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.624 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:15.624 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:15.624 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:15.624 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.624 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.624 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.884 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.884 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:15.884 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:15.884 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:15.884 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.884 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.884 [2024-10-09 01:30:14.549999] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev2 00:10:15.884 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.884 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:15.884 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:15.884 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.884 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.884 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:15.884 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.884 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.884 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:15.884 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:15.884 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:15.884 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.884 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.884 [2024-10-09 01:30:14.626140] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:15.884 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.884 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:15.884 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:15.884 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.884 
01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:15.884 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.884 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.884 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.884 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:15.885 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:15.885 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:15.885 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.885 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.885 [2024-10-09 01:30:14.705961] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:15.885 [2024-10-09 01:30:14.706063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:15.885 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.885 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:15.885 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:15.885 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:15.885 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.885 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.885 01:30:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:15.885 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.885 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:15.885 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:15.885 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:15.885 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:15.885 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:15.885 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:15.885 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.885 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.146 BaseBdev2 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.146 [ 00:10:16.146 { 00:10:16.146 "name": "BaseBdev2", 00:10:16.146 "aliases": [ 00:10:16.146 "c0b07dc8-3bad-4fae-afe5-b3980b58a313" 00:10:16.146 ], 00:10:16.146 "product_name": "Malloc disk", 00:10:16.146 "block_size": 512, 00:10:16.146 "num_blocks": 65536, 00:10:16.146 "uuid": "c0b07dc8-3bad-4fae-afe5-b3980b58a313", 00:10:16.146 "assigned_rate_limits": { 00:10:16.146 "rw_ios_per_sec": 0, 00:10:16.146 "rw_mbytes_per_sec": 0, 00:10:16.146 "r_mbytes_per_sec": 0, 00:10:16.146 "w_mbytes_per_sec": 0 00:10:16.146 }, 00:10:16.146 "claimed": false, 00:10:16.146 "zoned": false, 00:10:16.146 "supported_io_types": { 00:10:16.146 "read": true, 00:10:16.146 "write": true, 00:10:16.146 "unmap": true, 00:10:16.146 "flush": true, 00:10:16.146 "reset": true, 00:10:16.146 "nvme_admin": false, 00:10:16.146 "nvme_io": false, 00:10:16.146 "nvme_io_md": false, 00:10:16.146 "write_zeroes": true, 00:10:16.146 "zcopy": true, 00:10:16.146 "get_zone_info": false, 00:10:16.146 "zone_management": false, 00:10:16.146 "zone_append": false, 00:10:16.146 "compare": false, 00:10:16.146 "compare_and_write": false, 00:10:16.146 "abort": true, 00:10:16.146 "seek_hole": false, 00:10:16.146 "seek_data": false, 00:10:16.146 "copy": true, 00:10:16.146 "nvme_iov_md": false 00:10:16.146 }, 00:10:16.146 "memory_domains": [ 00:10:16.146 { 00:10:16.146 "dma_device_id": "system", 00:10:16.146 
"dma_device_type": 1 00:10:16.146 }, 00:10:16.146 { 00:10:16.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.146 "dma_device_type": 2 00:10:16.146 } 00:10:16.146 ], 00:10:16.146 "driver_specific": {} 00:10:16.146 } 00:10:16.146 ] 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.146 BaseBdev3 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.146 [ 00:10:16.146 { 00:10:16.146 "name": "BaseBdev3", 00:10:16.146 "aliases": [ 00:10:16.146 "d28c902c-e717-466a-a44d-3f0e46fbddbc" 00:10:16.146 ], 00:10:16.146 "product_name": "Malloc disk", 00:10:16.146 "block_size": 512, 00:10:16.146 "num_blocks": 65536, 00:10:16.146 "uuid": "d28c902c-e717-466a-a44d-3f0e46fbddbc", 00:10:16.146 "assigned_rate_limits": { 00:10:16.146 "rw_ios_per_sec": 0, 00:10:16.146 "rw_mbytes_per_sec": 0, 00:10:16.146 "r_mbytes_per_sec": 0, 00:10:16.146 "w_mbytes_per_sec": 0 00:10:16.146 }, 00:10:16.146 "claimed": false, 00:10:16.146 "zoned": false, 00:10:16.146 "supported_io_types": { 00:10:16.146 "read": true, 00:10:16.146 "write": true, 00:10:16.146 "unmap": true, 00:10:16.146 "flush": true, 00:10:16.146 "reset": true, 00:10:16.146 "nvme_admin": false, 00:10:16.146 "nvme_io": false, 00:10:16.146 "nvme_io_md": false, 00:10:16.146 "write_zeroes": true, 00:10:16.146 "zcopy": true, 00:10:16.146 "get_zone_info": false, 00:10:16.146 "zone_management": false, 00:10:16.146 "zone_append": false, 00:10:16.146 "compare": false, 00:10:16.146 "compare_and_write": false, 00:10:16.146 "abort": true, 00:10:16.146 "seek_hole": false, 00:10:16.146 "seek_data": false, 00:10:16.146 "copy": true, 00:10:16.146 "nvme_iov_md": false 00:10:16.146 }, 00:10:16.146 "memory_domains": [ 00:10:16.146 { 00:10:16.146 "dma_device_id": "system", 00:10:16.146 
"dma_device_type": 1 00:10:16.146 }, 00:10:16.146 { 00:10:16.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.146 "dma_device_type": 2 00:10:16.146 } 00:10:16.146 ], 00:10:16.146 "driver_specific": {} 00:10:16.146 } 00:10:16.146 ] 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.146 BaseBdev4 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.146 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.147 [ 00:10:16.147 { 00:10:16.147 "name": "BaseBdev4", 00:10:16.147 "aliases": [ 00:10:16.147 "13d8075c-8c0a-4ecc-9b5f-14de82eae4ce" 00:10:16.147 ], 00:10:16.147 "product_name": "Malloc disk", 00:10:16.147 "block_size": 512, 00:10:16.147 "num_blocks": 65536, 00:10:16.147 "uuid": "13d8075c-8c0a-4ecc-9b5f-14de82eae4ce", 00:10:16.147 "assigned_rate_limits": { 00:10:16.147 "rw_ios_per_sec": 0, 00:10:16.147 "rw_mbytes_per_sec": 0, 00:10:16.147 "r_mbytes_per_sec": 0, 00:10:16.147 "w_mbytes_per_sec": 0 00:10:16.147 }, 00:10:16.147 "claimed": false, 00:10:16.147 "zoned": false, 00:10:16.147 "supported_io_types": { 00:10:16.147 "read": true, 00:10:16.147 "write": true, 00:10:16.147 "unmap": true, 00:10:16.147 "flush": true, 00:10:16.147 "reset": true, 00:10:16.147 "nvme_admin": false, 00:10:16.147 "nvme_io": false, 00:10:16.147 "nvme_io_md": false, 00:10:16.147 "write_zeroes": true, 00:10:16.147 "zcopy": true, 00:10:16.147 "get_zone_info": false, 00:10:16.147 "zone_management": false, 00:10:16.147 "zone_append": false, 00:10:16.147 "compare": false, 00:10:16.147 "compare_and_write": false, 00:10:16.147 "abort": true, 00:10:16.147 "seek_hole": false, 00:10:16.147 "seek_data": false, 00:10:16.147 "copy": true, 00:10:16.147 "nvme_iov_md": false 00:10:16.147 }, 00:10:16.147 "memory_domains": [ 00:10:16.147 { 00:10:16.147 "dma_device_id": "system", 00:10:16.147 
"dma_device_type": 1 00:10:16.147 }, 00:10:16.147 { 00:10:16.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.147 "dma_device_type": 2 00:10:16.147 } 00:10:16.147 ], 00:10:16.147 "driver_specific": {} 00:10:16.147 } 00:10:16.147 ] 00:10:16.147 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.147 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:16.147 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:16.147 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:16.147 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:16.147 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.147 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.147 [2024-10-09 01:30:14.947668] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:16.147 [2024-10-09 01:30:14.947766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:16.147 [2024-10-09 01:30:14.947806] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:16.147 [2024-10-09 01:30:14.949968] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:16.147 [2024-10-09 01:30:14.950050] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:16.147 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.147 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:16.147 01:30:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.147 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.147 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:16.147 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.147 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.147 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.147 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.147 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.147 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.147 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.147 01:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.147 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.147 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.147 01:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.147 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.147 "name": "Existed_Raid", 00:10:16.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.147 "strip_size_kb": 64, 00:10:16.147 "state": "configuring", 00:10:16.147 "raid_level": "concat", 00:10:16.147 "superblock": false, 00:10:16.147 "num_base_bdevs": 4, 00:10:16.147 "num_base_bdevs_discovered": 3, 00:10:16.147 
"num_base_bdevs_operational": 4, 00:10:16.147 "base_bdevs_list": [ 00:10:16.147 { 00:10:16.147 "name": "BaseBdev1", 00:10:16.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.147 "is_configured": false, 00:10:16.147 "data_offset": 0, 00:10:16.147 "data_size": 0 00:10:16.147 }, 00:10:16.147 { 00:10:16.147 "name": "BaseBdev2", 00:10:16.147 "uuid": "c0b07dc8-3bad-4fae-afe5-b3980b58a313", 00:10:16.147 "is_configured": true, 00:10:16.147 "data_offset": 0, 00:10:16.147 "data_size": 65536 00:10:16.147 }, 00:10:16.147 { 00:10:16.147 "name": "BaseBdev3", 00:10:16.147 "uuid": "d28c902c-e717-466a-a44d-3f0e46fbddbc", 00:10:16.147 "is_configured": true, 00:10:16.147 "data_offset": 0, 00:10:16.147 "data_size": 65536 00:10:16.147 }, 00:10:16.147 { 00:10:16.147 "name": "BaseBdev4", 00:10:16.147 "uuid": "13d8075c-8c0a-4ecc-9b5f-14de82eae4ce", 00:10:16.147 "is_configured": true, 00:10:16.147 "data_offset": 0, 00:10:16.147 "data_size": 65536 00:10:16.147 } 00:10:16.147 ] 00:10:16.147 }' 00:10:16.147 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.147 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.716 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:16.716 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.716 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.716 [2024-10-09 01:30:15.371809] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:16.716 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.716 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:16.716 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:16.716 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.716 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:16.716 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.716 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.716 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.716 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.716 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.716 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.716 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.716 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.716 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.716 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.716 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.716 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.716 "name": "Existed_Raid", 00:10:16.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.716 "strip_size_kb": 64, 00:10:16.716 "state": "configuring", 00:10:16.716 "raid_level": "concat", 00:10:16.716 "superblock": false, 00:10:16.716 "num_base_bdevs": 4, 00:10:16.716 "num_base_bdevs_discovered": 2, 00:10:16.716 "num_base_bdevs_operational": 4, 00:10:16.716 "base_bdevs_list": [ 
00:10:16.716 { 00:10:16.716 "name": "BaseBdev1", 00:10:16.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.716 "is_configured": false, 00:10:16.716 "data_offset": 0, 00:10:16.716 "data_size": 0 00:10:16.716 }, 00:10:16.716 { 00:10:16.716 "name": null, 00:10:16.716 "uuid": "c0b07dc8-3bad-4fae-afe5-b3980b58a313", 00:10:16.716 "is_configured": false, 00:10:16.716 "data_offset": 0, 00:10:16.716 "data_size": 65536 00:10:16.716 }, 00:10:16.716 { 00:10:16.716 "name": "BaseBdev3", 00:10:16.716 "uuid": "d28c902c-e717-466a-a44d-3f0e46fbddbc", 00:10:16.716 "is_configured": true, 00:10:16.716 "data_offset": 0, 00:10:16.716 "data_size": 65536 00:10:16.716 }, 00:10:16.716 { 00:10:16.716 "name": "BaseBdev4", 00:10:16.716 "uuid": "13d8075c-8c0a-4ecc-9b5f-14de82eae4ce", 00:10:16.716 "is_configured": true, 00:10:16.716 "data_offset": 0, 00:10:16.716 "data_size": 65536 00:10:16.716 } 00:10:16.716 ] 00:10:16.716 }' 00:10:16.716 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.716 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.975 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:16.975 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.975 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.975 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.975 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.975 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:16.975 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:16.975 01:30:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.975 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.975 [2024-10-09 01:30:15.856819] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.975 BaseBdev1 00:10:16.975 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.975 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:16.975 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:16.975 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:16.975 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:16.975 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:16.975 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:16.975 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:16.975 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.975 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.234 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.234 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:17.234 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.234 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.234 [ 00:10:17.234 { 00:10:17.234 "name": "BaseBdev1", 00:10:17.234 "aliases": [ 00:10:17.234 
"f4ca15ee-3ab5-4a5d-bc2b-a2e50236329d" 00:10:17.234 ], 00:10:17.234 "product_name": "Malloc disk", 00:10:17.234 "block_size": 512, 00:10:17.234 "num_blocks": 65536, 00:10:17.234 "uuid": "f4ca15ee-3ab5-4a5d-bc2b-a2e50236329d", 00:10:17.234 "assigned_rate_limits": { 00:10:17.234 "rw_ios_per_sec": 0, 00:10:17.234 "rw_mbytes_per_sec": 0, 00:10:17.234 "r_mbytes_per_sec": 0, 00:10:17.234 "w_mbytes_per_sec": 0 00:10:17.234 }, 00:10:17.234 "claimed": true, 00:10:17.234 "claim_type": "exclusive_write", 00:10:17.234 "zoned": false, 00:10:17.234 "supported_io_types": { 00:10:17.234 "read": true, 00:10:17.234 "write": true, 00:10:17.234 "unmap": true, 00:10:17.234 "flush": true, 00:10:17.234 "reset": true, 00:10:17.234 "nvme_admin": false, 00:10:17.234 "nvme_io": false, 00:10:17.234 "nvme_io_md": false, 00:10:17.234 "write_zeroes": true, 00:10:17.234 "zcopy": true, 00:10:17.234 "get_zone_info": false, 00:10:17.234 "zone_management": false, 00:10:17.234 "zone_append": false, 00:10:17.234 "compare": false, 00:10:17.234 "compare_and_write": false, 00:10:17.234 "abort": true, 00:10:17.234 "seek_hole": false, 00:10:17.234 "seek_data": false, 00:10:17.234 "copy": true, 00:10:17.234 "nvme_iov_md": false 00:10:17.234 }, 00:10:17.234 "memory_domains": [ 00:10:17.234 { 00:10:17.234 "dma_device_id": "system", 00:10:17.234 "dma_device_type": 1 00:10:17.234 }, 00:10:17.234 { 00:10:17.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.234 "dma_device_type": 2 00:10:17.234 } 00:10:17.234 ], 00:10:17.234 "driver_specific": {} 00:10:17.234 } 00:10:17.234 ] 00:10:17.234 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.234 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:17.234 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:17.234 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:17.234 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.234 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.234 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.234 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.234 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.234 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.234 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.234 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.234 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.234 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.234 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.234 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.234 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.234 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.234 "name": "Existed_Raid", 00:10:17.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.234 "strip_size_kb": 64, 00:10:17.234 "state": "configuring", 00:10:17.234 "raid_level": "concat", 00:10:17.234 "superblock": false, 00:10:17.234 "num_base_bdevs": 4, 00:10:17.234 "num_base_bdevs_discovered": 3, 00:10:17.234 "num_base_bdevs_operational": 4, 00:10:17.234 
"base_bdevs_list": [ 00:10:17.234 { 00:10:17.234 "name": "BaseBdev1", 00:10:17.234 "uuid": "f4ca15ee-3ab5-4a5d-bc2b-a2e50236329d", 00:10:17.234 "is_configured": true, 00:10:17.234 "data_offset": 0, 00:10:17.234 "data_size": 65536 00:10:17.234 }, 00:10:17.234 { 00:10:17.234 "name": null, 00:10:17.234 "uuid": "c0b07dc8-3bad-4fae-afe5-b3980b58a313", 00:10:17.234 "is_configured": false, 00:10:17.234 "data_offset": 0, 00:10:17.234 "data_size": 65536 00:10:17.234 }, 00:10:17.234 { 00:10:17.234 "name": "BaseBdev3", 00:10:17.234 "uuid": "d28c902c-e717-466a-a44d-3f0e46fbddbc", 00:10:17.234 "is_configured": true, 00:10:17.234 "data_offset": 0, 00:10:17.234 "data_size": 65536 00:10:17.234 }, 00:10:17.234 { 00:10:17.234 "name": "BaseBdev4", 00:10:17.234 "uuid": "13d8075c-8c0a-4ecc-9b5f-14de82eae4ce", 00:10:17.234 "is_configured": true, 00:10:17.234 "data_offset": 0, 00:10:17.234 "data_size": 65536 00:10:17.234 } 00:10:17.234 ] 00:10:17.234 }' 00:10:17.234 01:30:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.234 01:30:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.493 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.493 01:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.493 01:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.493 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:17.493 01:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.493 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:17.493 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:17.493 01:30:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.493 01:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.493 [2024-10-09 01:30:16.317026] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:17.493 01:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.493 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:17.494 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.494 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.494 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.494 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.494 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.494 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.494 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.494 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.494 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.494 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.494 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.494 01:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.494 01:30:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:17.494 01:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.494 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.494 "name": "Existed_Raid", 00:10:17.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.494 "strip_size_kb": 64, 00:10:17.494 "state": "configuring", 00:10:17.494 "raid_level": "concat", 00:10:17.494 "superblock": false, 00:10:17.494 "num_base_bdevs": 4, 00:10:17.494 "num_base_bdevs_discovered": 2, 00:10:17.494 "num_base_bdevs_operational": 4, 00:10:17.494 "base_bdevs_list": [ 00:10:17.494 { 00:10:17.494 "name": "BaseBdev1", 00:10:17.494 "uuid": "f4ca15ee-3ab5-4a5d-bc2b-a2e50236329d", 00:10:17.494 "is_configured": true, 00:10:17.494 "data_offset": 0, 00:10:17.494 "data_size": 65536 00:10:17.494 }, 00:10:17.494 { 00:10:17.494 "name": null, 00:10:17.494 "uuid": "c0b07dc8-3bad-4fae-afe5-b3980b58a313", 00:10:17.494 "is_configured": false, 00:10:17.494 "data_offset": 0, 00:10:17.494 "data_size": 65536 00:10:17.494 }, 00:10:17.494 { 00:10:17.494 "name": null, 00:10:17.494 "uuid": "d28c902c-e717-466a-a44d-3f0e46fbddbc", 00:10:17.494 "is_configured": false, 00:10:17.494 "data_offset": 0, 00:10:17.494 "data_size": 65536 00:10:17.494 }, 00:10:17.494 { 00:10:17.494 "name": "BaseBdev4", 00:10:17.494 "uuid": "13d8075c-8c0a-4ecc-9b5f-14de82eae4ce", 00:10:17.494 "is_configured": true, 00:10:17.494 "data_offset": 0, 00:10:17.494 "data_size": 65536 00:10:17.494 } 00:10:17.494 ] 00:10:17.494 }' 00:10:17.494 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.494 01:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.061 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.061 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:10:18.061 01:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.061 01:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.061 01:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.061 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:18.061 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:18.061 01:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.061 01:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.061 [2024-10-09 01:30:16.829168] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:18.061 01:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.061 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:18.061 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.061 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.061 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.061 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.061 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.061 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.061 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:10:18.061 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.061 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.061 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.061 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.061 01:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.061 01:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.061 01:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.061 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.061 "name": "Existed_Raid", 00:10:18.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.061 "strip_size_kb": 64, 00:10:18.061 "state": "configuring", 00:10:18.061 "raid_level": "concat", 00:10:18.061 "superblock": false, 00:10:18.061 "num_base_bdevs": 4, 00:10:18.061 "num_base_bdevs_discovered": 3, 00:10:18.061 "num_base_bdevs_operational": 4, 00:10:18.061 "base_bdevs_list": [ 00:10:18.061 { 00:10:18.061 "name": "BaseBdev1", 00:10:18.061 "uuid": "f4ca15ee-3ab5-4a5d-bc2b-a2e50236329d", 00:10:18.061 "is_configured": true, 00:10:18.061 "data_offset": 0, 00:10:18.061 "data_size": 65536 00:10:18.061 }, 00:10:18.061 { 00:10:18.061 "name": null, 00:10:18.061 "uuid": "c0b07dc8-3bad-4fae-afe5-b3980b58a313", 00:10:18.061 "is_configured": false, 00:10:18.061 "data_offset": 0, 00:10:18.061 "data_size": 65536 00:10:18.061 }, 00:10:18.061 { 00:10:18.061 "name": "BaseBdev3", 00:10:18.061 "uuid": "d28c902c-e717-466a-a44d-3f0e46fbddbc", 00:10:18.062 "is_configured": true, 00:10:18.062 "data_offset": 0, 00:10:18.062 "data_size": 65536 00:10:18.062 }, 00:10:18.062 { 
00:10:18.062 "name": "BaseBdev4", 00:10:18.062 "uuid": "13d8075c-8c0a-4ecc-9b5f-14de82eae4ce", 00:10:18.062 "is_configured": true, 00:10:18.062 "data_offset": 0, 00:10:18.062 "data_size": 65536 00:10:18.062 } 00:10:18.062 ] 00:10:18.062 }' 00:10:18.062 01:30:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.062 01:30:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.630 [2024-10-09 01:30:17.305310] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.630 "name": "Existed_Raid", 00:10:18.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.630 "strip_size_kb": 64, 00:10:18.630 "state": "configuring", 00:10:18.630 "raid_level": "concat", 00:10:18.630 "superblock": false, 00:10:18.630 "num_base_bdevs": 4, 00:10:18.630 "num_base_bdevs_discovered": 2, 00:10:18.630 "num_base_bdevs_operational": 4, 00:10:18.630 "base_bdevs_list": [ 00:10:18.630 { 00:10:18.630 "name": null, 00:10:18.630 "uuid": 
"f4ca15ee-3ab5-4a5d-bc2b-a2e50236329d", 00:10:18.630 "is_configured": false, 00:10:18.630 "data_offset": 0, 00:10:18.630 "data_size": 65536 00:10:18.630 }, 00:10:18.630 { 00:10:18.630 "name": null, 00:10:18.630 "uuid": "c0b07dc8-3bad-4fae-afe5-b3980b58a313", 00:10:18.630 "is_configured": false, 00:10:18.630 "data_offset": 0, 00:10:18.630 "data_size": 65536 00:10:18.630 }, 00:10:18.630 { 00:10:18.630 "name": "BaseBdev3", 00:10:18.630 "uuid": "d28c902c-e717-466a-a44d-3f0e46fbddbc", 00:10:18.630 "is_configured": true, 00:10:18.630 "data_offset": 0, 00:10:18.630 "data_size": 65536 00:10:18.630 }, 00:10:18.630 { 00:10:18.630 "name": "BaseBdev4", 00:10:18.630 "uuid": "13d8075c-8c0a-4ecc-9b5f-14de82eae4ce", 00:10:18.630 "is_configured": true, 00:10:18.630 "data_offset": 0, 00:10:18.630 "data_size": 65536 00:10:18.630 } 00:10:18.630 ] 00:10:18.630 }' 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.630 01:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.890 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.890 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:18.890 01:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.890 01:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.890 01:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.890 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:18.890 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:18.890 01:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:18.890 01:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.149 [2024-10-09 01:30:17.784858] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:19.149 01:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.149 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:19.149 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.149 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.149 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:19.149 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.149 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.149 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.149 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.149 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.149 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.149 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.149 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.149 01:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.149 01:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.149 01:30:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.149 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.149 "name": "Existed_Raid", 00:10:19.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.149 "strip_size_kb": 64, 00:10:19.149 "state": "configuring", 00:10:19.149 "raid_level": "concat", 00:10:19.149 "superblock": false, 00:10:19.149 "num_base_bdevs": 4, 00:10:19.149 "num_base_bdevs_discovered": 3, 00:10:19.149 "num_base_bdevs_operational": 4, 00:10:19.149 "base_bdevs_list": [ 00:10:19.149 { 00:10:19.149 "name": null, 00:10:19.149 "uuid": "f4ca15ee-3ab5-4a5d-bc2b-a2e50236329d", 00:10:19.149 "is_configured": false, 00:10:19.149 "data_offset": 0, 00:10:19.149 "data_size": 65536 00:10:19.149 }, 00:10:19.149 { 00:10:19.149 "name": "BaseBdev2", 00:10:19.149 "uuid": "c0b07dc8-3bad-4fae-afe5-b3980b58a313", 00:10:19.149 "is_configured": true, 00:10:19.150 "data_offset": 0, 00:10:19.150 "data_size": 65536 00:10:19.150 }, 00:10:19.150 { 00:10:19.150 "name": "BaseBdev3", 00:10:19.150 "uuid": "d28c902c-e717-466a-a44d-3f0e46fbddbc", 00:10:19.150 "is_configured": true, 00:10:19.150 "data_offset": 0, 00:10:19.150 "data_size": 65536 00:10:19.150 }, 00:10:19.150 { 00:10:19.150 "name": "BaseBdev4", 00:10:19.150 "uuid": "13d8075c-8c0a-4ecc-9b5f-14de82eae4ce", 00:10:19.150 "is_configured": true, 00:10:19.150 "data_offset": 0, 00:10:19.150 "data_size": 65536 00:10:19.150 } 00:10:19.150 ] 00:10:19.150 }' 00:10:19.150 01:30:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.150 01:30:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.409 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.409 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:19.409 01:30:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.409 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.409 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.409 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:19.409 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:19.409 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.409 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.409 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f4ca15ee-3ab5-4a5d-bc2b-a2e50236329d 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.669 [2024-10-09 01:30:18.337753] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:19.669 [2024-10-09 01:30:18.337863] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:19.669 [2024-10-09 01:30:18.337891] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:19.669 [2024-10-09 01:30:18.338187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:10:19.669 [2024-10-09 01:30:18.338361] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:19.669 
[2024-10-09 01:30:18.338401] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:19.669 [2024-10-09 01:30:18.338643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.669 NewBaseBdev 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.669 [ 00:10:19.669 { 00:10:19.669 "name": "NewBaseBdev", 00:10:19.669 "aliases": [ 00:10:19.669 
"f4ca15ee-3ab5-4a5d-bc2b-a2e50236329d" 00:10:19.669 ], 00:10:19.669 "product_name": "Malloc disk", 00:10:19.669 "block_size": 512, 00:10:19.669 "num_blocks": 65536, 00:10:19.669 "uuid": "f4ca15ee-3ab5-4a5d-bc2b-a2e50236329d", 00:10:19.669 "assigned_rate_limits": { 00:10:19.669 "rw_ios_per_sec": 0, 00:10:19.669 "rw_mbytes_per_sec": 0, 00:10:19.669 "r_mbytes_per_sec": 0, 00:10:19.669 "w_mbytes_per_sec": 0 00:10:19.669 }, 00:10:19.669 "claimed": true, 00:10:19.669 "claim_type": "exclusive_write", 00:10:19.669 "zoned": false, 00:10:19.669 "supported_io_types": { 00:10:19.669 "read": true, 00:10:19.669 "write": true, 00:10:19.669 "unmap": true, 00:10:19.669 "flush": true, 00:10:19.669 "reset": true, 00:10:19.669 "nvme_admin": false, 00:10:19.669 "nvme_io": false, 00:10:19.669 "nvme_io_md": false, 00:10:19.669 "write_zeroes": true, 00:10:19.669 "zcopy": true, 00:10:19.669 "get_zone_info": false, 00:10:19.669 "zone_management": false, 00:10:19.669 "zone_append": false, 00:10:19.669 "compare": false, 00:10:19.669 "compare_and_write": false, 00:10:19.669 "abort": true, 00:10:19.669 "seek_hole": false, 00:10:19.669 "seek_data": false, 00:10:19.669 "copy": true, 00:10:19.669 "nvme_iov_md": false 00:10:19.669 }, 00:10:19.669 "memory_domains": [ 00:10:19.669 { 00:10:19.669 "dma_device_id": "system", 00:10:19.669 "dma_device_type": 1 00:10:19.669 }, 00:10:19.669 { 00:10:19.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.669 "dma_device_type": 2 00:10:19.669 } 00:10:19.669 ], 00:10:19.669 "driver_specific": {} 00:10:19.669 } 00:10:19.669 ] 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.669 "name": "Existed_Raid", 00:10:19.669 "uuid": "0d643d6e-b114-4355-9dec-bab5cf058d44", 00:10:19.669 "strip_size_kb": 64, 00:10:19.669 "state": "online", 00:10:19.669 "raid_level": "concat", 00:10:19.669 "superblock": false, 00:10:19.669 "num_base_bdevs": 4, 00:10:19.669 "num_base_bdevs_discovered": 4, 00:10:19.669 "num_base_bdevs_operational": 4, 00:10:19.669 "base_bdevs_list": [ 
00:10:19.669 { 00:10:19.669 "name": "NewBaseBdev", 00:10:19.669 "uuid": "f4ca15ee-3ab5-4a5d-bc2b-a2e50236329d", 00:10:19.669 "is_configured": true, 00:10:19.669 "data_offset": 0, 00:10:19.669 "data_size": 65536 00:10:19.669 }, 00:10:19.669 { 00:10:19.669 "name": "BaseBdev2", 00:10:19.669 "uuid": "c0b07dc8-3bad-4fae-afe5-b3980b58a313", 00:10:19.669 "is_configured": true, 00:10:19.669 "data_offset": 0, 00:10:19.669 "data_size": 65536 00:10:19.669 }, 00:10:19.669 { 00:10:19.669 "name": "BaseBdev3", 00:10:19.669 "uuid": "d28c902c-e717-466a-a44d-3f0e46fbddbc", 00:10:19.669 "is_configured": true, 00:10:19.669 "data_offset": 0, 00:10:19.669 "data_size": 65536 00:10:19.669 }, 00:10:19.669 { 00:10:19.669 "name": "BaseBdev4", 00:10:19.669 "uuid": "13d8075c-8c0a-4ecc-9b5f-14de82eae4ce", 00:10:19.669 "is_configured": true, 00:10:19.669 "data_offset": 0, 00:10:19.669 "data_size": 65536 00:10:19.669 } 00:10:19.669 ] 00:10:19.669 }' 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.669 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.929 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:19.929 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:19.929 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:19.929 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:19.929 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:19.929 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:19.929 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:19.929 01:30:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.929 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.929 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:20.189 [2024-10-09 01:30:18.822281] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.189 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.189 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:20.189 "name": "Existed_Raid", 00:10:20.189 "aliases": [ 00:10:20.189 "0d643d6e-b114-4355-9dec-bab5cf058d44" 00:10:20.189 ], 00:10:20.189 "product_name": "Raid Volume", 00:10:20.189 "block_size": 512, 00:10:20.189 "num_blocks": 262144, 00:10:20.189 "uuid": "0d643d6e-b114-4355-9dec-bab5cf058d44", 00:10:20.189 "assigned_rate_limits": { 00:10:20.189 "rw_ios_per_sec": 0, 00:10:20.189 "rw_mbytes_per_sec": 0, 00:10:20.189 "r_mbytes_per_sec": 0, 00:10:20.189 "w_mbytes_per_sec": 0 00:10:20.189 }, 00:10:20.189 "claimed": false, 00:10:20.189 "zoned": false, 00:10:20.189 "supported_io_types": { 00:10:20.189 "read": true, 00:10:20.189 "write": true, 00:10:20.189 "unmap": true, 00:10:20.189 "flush": true, 00:10:20.189 "reset": true, 00:10:20.189 "nvme_admin": false, 00:10:20.189 "nvme_io": false, 00:10:20.189 "nvme_io_md": false, 00:10:20.189 "write_zeroes": true, 00:10:20.189 "zcopy": false, 00:10:20.189 "get_zone_info": false, 00:10:20.189 "zone_management": false, 00:10:20.189 "zone_append": false, 00:10:20.189 "compare": false, 00:10:20.189 "compare_and_write": false, 00:10:20.189 "abort": false, 00:10:20.189 "seek_hole": false, 00:10:20.189 "seek_data": false, 00:10:20.189 "copy": false, 00:10:20.189 "nvme_iov_md": false 00:10:20.189 }, 00:10:20.189 "memory_domains": [ 00:10:20.189 { 00:10:20.189 "dma_device_id": "system", 00:10:20.189 "dma_device_type": 1 
00:10:20.189 }, 00:10:20.189 { 00:10:20.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.189 "dma_device_type": 2 00:10:20.189 }, 00:10:20.189 { 00:10:20.189 "dma_device_id": "system", 00:10:20.189 "dma_device_type": 1 00:10:20.189 }, 00:10:20.189 { 00:10:20.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.189 "dma_device_type": 2 00:10:20.189 }, 00:10:20.189 { 00:10:20.189 "dma_device_id": "system", 00:10:20.189 "dma_device_type": 1 00:10:20.189 }, 00:10:20.189 { 00:10:20.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.189 "dma_device_type": 2 00:10:20.189 }, 00:10:20.189 { 00:10:20.189 "dma_device_id": "system", 00:10:20.189 "dma_device_type": 1 00:10:20.189 }, 00:10:20.189 { 00:10:20.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.189 "dma_device_type": 2 00:10:20.189 } 00:10:20.189 ], 00:10:20.189 "driver_specific": { 00:10:20.189 "raid": { 00:10:20.189 "uuid": "0d643d6e-b114-4355-9dec-bab5cf058d44", 00:10:20.189 "strip_size_kb": 64, 00:10:20.189 "state": "online", 00:10:20.189 "raid_level": "concat", 00:10:20.189 "superblock": false, 00:10:20.189 "num_base_bdevs": 4, 00:10:20.189 "num_base_bdevs_discovered": 4, 00:10:20.189 "num_base_bdevs_operational": 4, 00:10:20.189 "base_bdevs_list": [ 00:10:20.189 { 00:10:20.189 "name": "NewBaseBdev", 00:10:20.189 "uuid": "f4ca15ee-3ab5-4a5d-bc2b-a2e50236329d", 00:10:20.189 "is_configured": true, 00:10:20.189 "data_offset": 0, 00:10:20.189 "data_size": 65536 00:10:20.189 }, 00:10:20.189 { 00:10:20.189 "name": "BaseBdev2", 00:10:20.189 "uuid": "c0b07dc8-3bad-4fae-afe5-b3980b58a313", 00:10:20.189 "is_configured": true, 00:10:20.189 "data_offset": 0, 00:10:20.189 "data_size": 65536 00:10:20.189 }, 00:10:20.189 { 00:10:20.189 "name": "BaseBdev3", 00:10:20.189 "uuid": "d28c902c-e717-466a-a44d-3f0e46fbddbc", 00:10:20.189 "is_configured": true, 00:10:20.189 "data_offset": 0, 00:10:20.189 "data_size": 65536 00:10:20.189 }, 00:10:20.189 { 00:10:20.189 "name": "BaseBdev4", 00:10:20.189 "uuid": 
"13d8075c-8c0a-4ecc-9b5f-14de82eae4ce", 00:10:20.189 "is_configured": true, 00:10:20.189 "data_offset": 0, 00:10:20.189 "data_size": 65536 00:10:20.189 } 00:10:20.189 ] 00:10:20.189 } 00:10:20.189 } 00:10:20.189 }' 00:10:20.189 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:20.189 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:20.189 BaseBdev2 00:10:20.189 BaseBdev3 00:10:20.189 BaseBdev4' 00:10:20.189 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.189 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:20.189 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.189 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:20.189 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.189 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.190 01:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.190 01:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.190 01:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.190 01:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.190 01:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.190 01:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.190 01:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:20.190 01:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.190 01:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.190 01:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.190 01:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.190 01:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.190 01:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.190 01:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.190 01:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:20.190 01:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.190 01:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.453 01:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.453 01:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.453 01:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.453 01:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.453 01:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:20.453 01:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:10:20.453 01:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.453 01:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.453 01:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.453 01:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.453 01:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.453 01:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:20.453 01:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.453 01:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.453 [2024-10-09 01:30:19.166023] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:20.453 [2024-10-09 01:30:19.166088] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:20.453 [2024-10-09 01:30:19.166172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.453 [2024-10-09 01:30:19.166259] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.453 [2024-10-09 01:30:19.166275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:20.453 01:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.453 01:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83296 00:10:20.453 01:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 83296 ']' 00:10:20.453 01:30:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 83296 00:10:20.453 01:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:20.453 01:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:20.453 01:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83296 00:10:20.453 01:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:20.453 killing process with pid 83296 00:10:20.453 01:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:20.453 01:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83296' 00:10:20.453 01:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 83296 00:10:20.453 [2024-10-09 01:30:19.215613] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:20.453 01:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 83296 00:10:20.453 [2024-10-09 01:30:19.290016] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:21.022 00:10:21.022 real 0m9.744s 00:10:21.022 user 0m16.311s 00:10:21.022 sys 0m2.157s 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:21.022 ************************************ 00:10:21.022 END TEST raid_state_function_test 00:10:21.022 ************************************ 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.022 01:30:19 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:21.022 01:30:19 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:21.022 01:30:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.022 01:30:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:21.022 ************************************ 00:10:21.022 START TEST raid_state_function_test_sb 00:10:21.022 ************************************ 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 
-- # (( i++ )) 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:21.022 Process raid pid: 83951 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83951 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83951' 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83951 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 83951 ']' 00:10:21.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.022 01:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:21.023 01:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.023 [2024-10-09 01:30:19.830759] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:10:21.023 [2024-10-09 01:30:19.830991] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.282 [2024-10-09 01:30:19.968443] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:21.282 [2024-10-09 01:30:19.996652] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.282 [2024-10-09 01:30:20.064546] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.282 [2024-10-09 01:30:20.140384] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.282 [2024-10-09 01:30:20.140447] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.851 01:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:21.851 01:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:21.851 01:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:21.851 01:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.851 01:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.851 [2024-10-09 01:30:20.669032] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:21.851 [2024-10-09 01:30:20.669138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:21.851 [2024-10-09 01:30:20.669176] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:21.851 [2024-10-09 01:30:20.669228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:21.851 [2024-10-09 01:30:20.669260] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:21.851 [2024-10-09 01:30:20.669285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:21.851 [2024-10-09 01:30:20.669316] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:10:21.851 [2024-10-09 01:30:20.669338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:21.851 01:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.851 01:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:21.851 01:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.851 01:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.851 01:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:21.851 01:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.851 01:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.851 01:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.851 01:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.851 01:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.851 01:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.851 01:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.851 01:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.851 01:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.851 01:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.851 01:30:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.851 01:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.851 "name": "Existed_Raid", 00:10:21.851 "uuid": "c31424b0-6a2c-4522-867b-b438882293b7", 00:10:21.851 "strip_size_kb": 64, 00:10:21.851 "state": "configuring", 00:10:21.851 "raid_level": "concat", 00:10:21.851 "superblock": true, 00:10:21.851 "num_base_bdevs": 4, 00:10:21.851 "num_base_bdevs_discovered": 0, 00:10:21.851 "num_base_bdevs_operational": 4, 00:10:21.851 "base_bdevs_list": [ 00:10:21.851 { 00:10:21.851 "name": "BaseBdev1", 00:10:21.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.851 "is_configured": false, 00:10:21.851 "data_offset": 0, 00:10:21.851 "data_size": 0 00:10:21.851 }, 00:10:21.851 { 00:10:21.851 "name": "BaseBdev2", 00:10:21.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.851 "is_configured": false, 00:10:21.851 "data_offset": 0, 00:10:21.851 "data_size": 0 00:10:21.851 }, 00:10:21.851 { 00:10:21.851 "name": "BaseBdev3", 00:10:21.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.851 "is_configured": false, 00:10:21.851 "data_offset": 0, 00:10:21.851 "data_size": 0 00:10:21.851 }, 00:10:21.851 { 00:10:21.851 "name": "BaseBdev4", 00:10:21.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.851 "is_configured": false, 00:10:21.851 "data_offset": 0, 00:10:21.851 "data_size": 0 00:10:21.851 } 00:10:21.851 ] 00:10:21.851 }' 00:10:21.851 01:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.851 01:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.420 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:22.420 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.420 01:30:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:22.420 [2024-10-09 01:30:21.153036] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:22.420 [2024-10-09 01:30:21.153080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:22.420 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.420 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:22.420 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.420 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.420 [2024-10-09 01:30:21.165052] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:22.420 [2024-10-09 01:30:21.165129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:22.420 [2024-10-09 01:30:21.165160] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:22.420 [2024-10-09 01:30:21.165181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:22.420 [2024-10-09 01:30:21.165200] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:22.421 [2024-10-09 01:30:21.165219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:22.421 [2024-10-09 01:30:21.165238] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:22.421 [2024-10-09 01:30:21.165256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.421 [2024-10-09 01:30:21.191929] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:22.421 BaseBdev1 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.421 [ 00:10:22.421 { 00:10:22.421 "name": "BaseBdev1", 00:10:22.421 "aliases": [ 00:10:22.421 "679ab0cb-a496-4224-a028-7637dc39ccf6" 00:10:22.421 ], 00:10:22.421 "product_name": "Malloc disk", 00:10:22.421 "block_size": 512, 00:10:22.421 "num_blocks": 65536, 00:10:22.421 "uuid": "679ab0cb-a496-4224-a028-7637dc39ccf6", 00:10:22.421 "assigned_rate_limits": { 00:10:22.421 "rw_ios_per_sec": 0, 00:10:22.421 "rw_mbytes_per_sec": 0, 00:10:22.421 "r_mbytes_per_sec": 0, 00:10:22.421 "w_mbytes_per_sec": 0 00:10:22.421 }, 00:10:22.421 "claimed": true, 00:10:22.421 "claim_type": "exclusive_write", 00:10:22.421 "zoned": false, 00:10:22.421 "supported_io_types": { 00:10:22.421 "read": true, 00:10:22.421 "write": true, 00:10:22.421 "unmap": true, 00:10:22.421 "flush": true, 00:10:22.421 "reset": true, 00:10:22.421 "nvme_admin": false, 00:10:22.421 "nvme_io": false, 00:10:22.421 "nvme_io_md": false, 00:10:22.421 "write_zeroes": true, 00:10:22.421 "zcopy": true, 00:10:22.421 "get_zone_info": false, 00:10:22.421 "zone_management": false, 00:10:22.421 "zone_append": false, 00:10:22.421 "compare": false, 00:10:22.421 "compare_and_write": false, 00:10:22.421 "abort": true, 00:10:22.421 "seek_hole": false, 00:10:22.421 "seek_data": false, 00:10:22.421 "copy": true, 00:10:22.421 "nvme_iov_md": false 00:10:22.421 }, 00:10:22.421 "memory_domains": [ 00:10:22.421 { 00:10:22.421 "dma_device_id": "system", 00:10:22.421 "dma_device_type": 1 00:10:22.421 }, 00:10:22.421 { 00:10:22.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.421 "dma_device_type": 2 00:10:22.421 } 00:10:22.421 ], 00:10:22.421 "driver_specific": {} 00:10:22.421 } 00:10:22.421 ] 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # 
return 0 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.421 "name": "Existed_Raid", 00:10:22.421 "uuid": "55f45275-2171-4c5b-a098-582f78c49f78", 
00:10:22.421 "strip_size_kb": 64, 00:10:22.421 "state": "configuring", 00:10:22.421 "raid_level": "concat", 00:10:22.421 "superblock": true, 00:10:22.421 "num_base_bdevs": 4, 00:10:22.421 "num_base_bdevs_discovered": 1, 00:10:22.421 "num_base_bdevs_operational": 4, 00:10:22.421 "base_bdevs_list": [ 00:10:22.421 { 00:10:22.421 "name": "BaseBdev1", 00:10:22.421 "uuid": "679ab0cb-a496-4224-a028-7637dc39ccf6", 00:10:22.421 "is_configured": true, 00:10:22.421 "data_offset": 2048, 00:10:22.421 "data_size": 63488 00:10:22.421 }, 00:10:22.421 { 00:10:22.421 "name": "BaseBdev2", 00:10:22.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.421 "is_configured": false, 00:10:22.421 "data_offset": 0, 00:10:22.421 "data_size": 0 00:10:22.421 }, 00:10:22.421 { 00:10:22.421 "name": "BaseBdev3", 00:10:22.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.421 "is_configured": false, 00:10:22.421 "data_offset": 0, 00:10:22.421 "data_size": 0 00:10:22.421 }, 00:10:22.421 { 00:10:22.421 "name": "BaseBdev4", 00:10:22.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.421 "is_configured": false, 00:10:22.421 "data_offset": 0, 00:10:22.421 "data_size": 0 00:10:22.421 } 00:10:22.421 ] 00:10:22.421 }' 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.421 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.989 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:22.989 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.989 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.989 [2024-10-09 01:30:21.652060] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:22.989 [2024-10-09 01:30:21.652159] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:22.989 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.989 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:22.989 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.989 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.989 [2024-10-09 01:30:21.664103] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:22.989 [2024-10-09 01:30:21.666251] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:22.989 [2024-10-09 01:30:21.666328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:22.989 [2024-10-09 01:30:21.666344] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:22.989 [2024-10-09 01:30:21.666351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:22.989 [2024-10-09 01:30:21.666359] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:22.989 [2024-10-09 01:30:21.666367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:22.989 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.989 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:22.989 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:22.989 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:22.989 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.989 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.989 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:22.989 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.990 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.990 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.990 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.990 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.990 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.990 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.990 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.990 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.990 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:22.990 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.990 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.990 "name": "Existed_Raid", 00:10:22.990 "uuid": "a65f7ce1-b2b5-4557-914e-f373f8736306", 00:10:22.990 "strip_size_kb": 64, 00:10:22.990 "state": "configuring", 00:10:22.990 "raid_level": "concat", 00:10:22.990 "superblock": true, 00:10:22.990 
"num_base_bdevs": 4, 00:10:22.990 "num_base_bdevs_discovered": 1, 00:10:22.990 "num_base_bdevs_operational": 4, 00:10:22.990 "base_bdevs_list": [ 00:10:22.990 { 00:10:22.990 "name": "BaseBdev1", 00:10:22.990 "uuid": "679ab0cb-a496-4224-a028-7637dc39ccf6", 00:10:22.990 "is_configured": true, 00:10:22.990 "data_offset": 2048, 00:10:22.990 "data_size": 63488 00:10:22.990 }, 00:10:22.990 { 00:10:22.990 "name": "BaseBdev2", 00:10:22.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.990 "is_configured": false, 00:10:22.990 "data_offset": 0, 00:10:22.990 "data_size": 0 00:10:22.990 }, 00:10:22.990 { 00:10:22.990 "name": "BaseBdev3", 00:10:22.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.990 "is_configured": false, 00:10:22.990 "data_offset": 0, 00:10:22.990 "data_size": 0 00:10:22.990 }, 00:10:22.990 { 00:10:22.990 "name": "BaseBdev4", 00:10:22.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.990 "is_configured": false, 00:10:22.990 "data_offset": 0, 00:10:22.990 "data_size": 0 00:10:22.990 } 00:10:22.990 ] 00:10:22.990 }' 00:10:22.990 01:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.990 01:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.250 [2024-10-09 01:30:22.061995] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:23.250 BaseBdev2 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev 
BaseBdev2 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.250 [ 00:10:23.250 { 00:10:23.250 "name": "BaseBdev2", 00:10:23.250 "aliases": [ 00:10:23.250 "039b3f87-30fa-4c3e-9a28-3b7ec4e515d5" 00:10:23.250 ], 00:10:23.250 "product_name": "Malloc disk", 00:10:23.250 "block_size": 512, 00:10:23.250 "num_blocks": 65536, 00:10:23.250 "uuid": "039b3f87-30fa-4c3e-9a28-3b7ec4e515d5", 00:10:23.250 "assigned_rate_limits": { 00:10:23.250 "rw_ios_per_sec": 0, 00:10:23.250 "rw_mbytes_per_sec": 0, 00:10:23.250 "r_mbytes_per_sec": 0, 00:10:23.250 "w_mbytes_per_sec": 0 00:10:23.250 }, 00:10:23.250 "claimed": true, 00:10:23.250 "claim_type": 
"exclusive_write", 00:10:23.250 "zoned": false, 00:10:23.250 "supported_io_types": { 00:10:23.250 "read": true, 00:10:23.250 "write": true, 00:10:23.250 "unmap": true, 00:10:23.250 "flush": true, 00:10:23.250 "reset": true, 00:10:23.250 "nvme_admin": false, 00:10:23.250 "nvme_io": false, 00:10:23.250 "nvme_io_md": false, 00:10:23.250 "write_zeroes": true, 00:10:23.250 "zcopy": true, 00:10:23.250 "get_zone_info": false, 00:10:23.250 "zone_management": false, 00:10:23.250 "zone_append": false, 00:10:23.250 "compare": false, 00:10:23.250 "compare_and_write": false, 00:10:23.250 "abort": true, 00:10:23.250 "seek_hole": false, 00:10:23.250 "seek_data": false, 00:10:23.250 "copy": true, 00:10:23.250 "nvme_iov_md": false 00:10:23.250 }, 00:10:23.250 "memory_domains": [ 00:10:23.250 { 00:10:23.250 "dma_device_id": "system", 00:10:23.250 "dma_device_type": 1 00:10:23.250 }, 00:10:23.250 { 00:10:23.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.250 "dma_device_type": 2 00:10:23.250 } 00:10:23.250 ], 00:10:23.250 "driver_specific": {} 00:10:23.250 } 00:10:23.250 ] 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.250 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.509 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.509 "name": "Existed_Raid", 00:10:23.509 "uuid": "a65f7ce1-b2b5-4557-914e-f373f8736306", 00:10:23.509 "strip_size_kb": 64, 00:10:23.509 "state": "configuring", 00:10:23.509 "raid_level": "concat", 00:10:23.509 "superblock": true, 00:10:23.509 "num_base_bdevs": 4, 00:10:23.509 "num_base_bdevs_discovered": 2, 00:10:23.509 "num_base_bdevs_operational": 4, 00:10:23.509 "base_bdevs_list": [ 00:10:23.509 { 00:10:23.509 "name": "BaseBdev1", 00:10:23.509 "uuid": "679ab0cb-a496-4224-a028-7637dc39ccf6", 00:10:23.509 "is_configured": true, 00:10:23.510 "data_offset": 2048, 00:10:23.510 
"data_size": 63488 00:10:23.510 }, 00:10:23.510 { 00:10:23.510 "name": "BaseBdev2", 00:10:23.510 "uuid": "039b3f87-30fa-4c3e-9a28-3b7ec4e515d5", 00:10:23.510 "is_configured": true, 00:10:23.510 "data_offset": 2048, 00:10:23.510 "data_size": 63488 00:10:23.510 }, 00:10:23.510 { 00:10:23.510 "name": "BaseBdev3", 00:10:23.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.510 "is_configured": false, 00:10:23.510 "data_offset": 0, 00:10:23.510 "data_size": 0 00:10:23.510 }, 00:10:23.510 { 00:10:23.510 "name": "BaseBdev4", 00:10:23.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.510 "is_configured": false, 00:10:23.510 "data_offset": 0, 00:10:23.510 "data_size": 0 00:10:23.510 } 00:10:23.510 ] 00:10:23.510 }' 00:10:23.510 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.510 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.769 [2024-10-09 01:30:22.526788] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:23.769 BaseBdev3 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.769 [ 00:10:23.769 { 00:10:23.769 "name": "BaseBdev3", 00:10:23.769 "aliases": [ 00:10:23.769 "131452d3-95ff-40d6-91ad-9f09095c8ddc" 00:10:23.769 ], 00:10:23.769 "product_name": "Malloc disk", 00:10:23.769 "block_size": 512, 00:10:23.769 "num_blocks": 65536, 00:10:23.769 "uuid": "131452d3-95ff-40d6-91ad-9f09095c8ddc", 00:10:23.769 "assigned_rate_limits": { 00:10:23.769 "rw_ios_per_sec": 0, 00:10:23.769 "rw_mbytes_per_sec": 0, 00:10:23.769 "r_mbytes_per_sec": 0, 00:10:23.769 "w_mbytes_per_sec": 0 00:10:23.769 }, 00:10:23.769 "claimed": true, 00:10:23.769 "claim_type": "exclusive_write", 00:10:23.769 "zoned": false, 00:10:23.769 "supported_io_types": { 00:10:23.769 "read": true, 00:10:23.769 "write": true, 00:10:23.769 "unmap": true, 00:10:23.769 "flush": true, 00:10:23.769 "reset": true, 00:10:23.769 "nvme_admin": false, 00:10:23.769 "nvme_io": false, 00:10:23.769 "nvme_io_md": false, 
00:10:23.769 "write_zeroes": true, 00:10:23.769 "zcopy": true, 00:10:23.769 "get_zone_info": false, 00:10:23.769 "zone_management": false, 00:10:23.769 "zone_append": false, 00:10:23.769 "compare": false, 00:10:23.769 "compare_and_write": false, 00:10:23.769 "abort": true, 00:10:23.769 "seek_hole": false, 00:10:23.769 "seek_data": false, 00:10:23.769 "copy": true, 00:10:23.769 "nvme_iov_md": false 00:10:23.769 }, 00:10:23.769 "memory_domains": [ 00:10:23.769 { 00:10:23.769 "dma_device_id": "system", 00:10:23.769 "dma_device_type": 1 00:10:23.769 }, 00:10:23.769 { 00:10:23.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.769 "dma_device_type": 2 00:10:23.769 } 00:10:23.769 ], 00:10:23.769 "driver_specific": {} 00:10:23.769 } 00:10:23.769 ] 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.769 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.769 "name": "Existed_Raid", 00:10:23.769 "uuid": "a65f7ce1-b2b5-4557-914e-f373f8736306", 00:10:23.769 "strip_size_kb": 64, 00:10:23.769 "state": "configuring", 00:10:23.769 "raid_level": "concat", 00:10:23.769 "superblock": true, 00:10:23.769 "num_base_bdevs": 4, 00:10:23.769 "num_base_bdevs_discovered": 3, 00:10:23.769 "num_base_bdevs_operational": 4, 00:10:23.769 "base_bdevs_list": [ 00:10:23.770 { 00:10:23.770 "name": "BaseBdev1", 00:10:23.770 "uuid": "679ab0cb-a496-4224-a028-7637dc39ccf6", 00:10:23.770 "is_configured": true, 00:10:23.770 "data_offset": 2048, 00:10:23.770 "data_size": 63488 00:10:23.770 }, 00:10:23.770 { 00:10:23.770 "name": "BaseBdev2", 00:10:23.770 "uuid": "039b3f87-30fa-4c3e-9a28-3b7ec4e515d5", 00:10:23.770 "is_configured": true, 00:10:23.770 "data_offset": 2048, 00:10:23.770 "data_size": 63488 00:10:23.770 }, 00:10:23.770 { 00:10:23.770 "name": "BaseBdev3", 00:10:23.770 "uuid": 
"131452d3-95ff-40d6-91ad-9f09095c8ddc", 00:10:23.770 "is_configured": true, 00:10:23.770 "data_offset": 2048, 00:10:23.770 "data_size": 63488 00:10:23.770 }, 00:10:23.770 { 00:10:23.770 "name": "BaseBdev4", 00:10:23.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.770 "is_configured": false, 00:10:23.770 "data_offset": 0, 00:10:23.770 "data_size": 0 00:10:23.770 } 00:10:23.770 ] 00:10:23.770 }' 00:10:23.770 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.770 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.339 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:24.339 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.339 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.339 [2024-10-09 01:30:22.987965] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:24.339 [2024-10-09 01:30:22.988320] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:24.339 [2024-10-09 01:30:22.988389] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:24.339 BaseBdev4 00:10:24.339 [2024-10-09 01:30:22.988793] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:24.339 [2024-10-09 01:30:22.988958] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:24.339 [2024-10-09 01:30:22.989011] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:24.339 [2024-10-09 01:30:22.989193] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.339 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:10:24.339 01:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:24.339 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:24.339 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:24.339 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:24.339 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:24.339 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:24.339 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:24.339 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.339 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.339 01:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.339 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:24.339 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.339 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.339 [ 00:10:24.339 { 00:10:24.339 "name": "BaseBdev4", 00:10:24.339 "aliases": [ 00:10:24.339 "62e0a5c0-8628-49dd-b714-b461f3adfbc3" 00:10:24.339 ], 00:10:24.339 "product_name": "Malloc disk", 00:10:24.339 "block_size": 512, 00:10:24.339 "num_blocks": 65536, 00:10:24.339 "uuid": "62e0a5c0-8628-49dd-b714-b461f3adfbc3", 00:10:24.339 "assigned_rate_limits": { 00:10:24.339 "rw_ios_per_sec": 0, 00:10:24.339 "rw_mbytes_per_sec": 0, 00:10:24.339 "r_mbytes_per_sec": 0, 
00:10:24.339 "w_mbytes_per_sec": 0 00:10:24.339 }, 00:10:24.339 "claimed": true, 00:10:24.339 "claim_type": "exclusive_write", 00:10:24.339 "zoned": false, 00:10:24.339 "supported_io_types": { 00:10:24.339 "read": true, 00:10:24.339 "write": true, 00:10:24.340 "unmap": true, 00:10:24.340 "flush": true, 00:10:24.340 "reset": true, 00:10:24.340 "nvme_admin": false, 00:10:24.340 "nvme_io": false, 00:10:24.340 "nvme_io_md": false, 00:10:24.340 "write_zeroes": true, 00:10:24.340 "zcopy": true, 00:10:24.340 "get_zone_info": false, 00:10:24.340 "zone_management": false, 00:10:24.340 "zone_append": false, 00:10:24.340 "compare": false, 00:10:24.340 "compare_and_write": false, 00:10:24.340 "abort": true, 00:10:24.340 "seek_hole": false, 00:10:24.340 "seek_data": false, 00:10:24.340 "copy": true, 00:10:24.340 "nvme_iov_md": false 00:10:24.340 }, 00:10:24.340 "memory_domains": [ 00:10:24.340 { 00:10:24.340 "dma_device_id": "system", 00:10:24.340 "dma_device_type": 1 00:10:24.340 }, 00:10:24.340 { 00:10:24.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.340 "dma_device_type": 2 00:10:24.340 } 00:10:24.340 ], 00:10:24.340 "driver_specific": {} 00:10:24.340 } 00:10:24.340 ] 00:10:24.340 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.340 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:24.340 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:24.340 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:24.340 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:24.340 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.340 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:10:24.340 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:24.340 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.340 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.340 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.340 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.340 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.340 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.340 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.340 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.340 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.340 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.340 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.340 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.340 "name": "Existed_Raid", 00:10:24.340 "uuid": "a65f7ce1-b2b5-4557-914e-f373f8736306", 00:10:24.340 "strip_size_kb": 64, 00:10:24.340 "state": "online", 00:10:24.340 "raid_level": "concat", 00:10:24.340 "superblock": true, 00:10:24.340 "num_base_bdevs": 4, 00:10:24.340 "num_base_bdevs_discovered": 4, 00:10:24.340 "num_base_bdevs_operational": 4, 00:10:24.340 "base_bdevs_list": [ 00:10:24.340 { 00:10:24.340 "name": "BaseBdev1", 00:10:24.340 "uuid": 
"679ab0cb-a496-4224-a028-7637dc39ccf6", 00:10:24.340 "is_configured": true, 00:10:24.340 "data_offset": 2048, 00:10:24.340 "data_size": 63488 00:10:24.340 }, 00:10:24.340 { 00:10:24.340 "name": "BaseBdev2", 00:10:24.340 "uuid": "039b3f87-30fa-4c3e-9a28-3b7ec4e515d5", 00:10:24.340 "is_configured": true, 00:10:24.340 "data_offset": 2048, 00:10:24.340 "data_size": 63488 00:10:24.340 }, 00:10:24.340 { 00:10:24.340 "name": "BaseBdev3", 00:10:24.340 "uuid": "131452d3-95ff-40d6-91ad-9f09095c8ddc", 00:10:24.340 "is_configured": true, 00:10:24.340 "data_offset": 2048, 00:10:24.340 "data_size": 63488 00:10:24.340 }, 00:10:24.340 { 00:10:24.340 "name": "BaseBdev4", 00:10:24.340 "uuid": "62e0a5c0-8628-49dd-b714-b461f3adfbc3", 00:10:24.340 "is_configured": true, 00:10:24.340 "data_offset": 2048, 00:10:24.340 "data_size": 63488 00:10:24.340 } 00:10:24.340 ] 00:10:24.340 }' 00:10:24.340 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.340 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.600 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:24.600 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:24.600 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:24.600 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:24.600 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:24.600 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:24.600 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:24.600 01:30:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.600 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:24.600 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.600 [2024-10-09 01:30:23.472413] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:24.600 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.600 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:24.600 "name": "Existed_Raid", 00:10:24.600 "aliases": [ 00:10:24.600 "a65f7ce1-b2b5-4557-914e-f373f8736306" 00:10:24.600 ], 00:10:24.600 "product_name": "Raid Volume", 00:10:24.600 "block_size": 512, 00:10:24.600 "num_blocks": 253952, 00:10:24.600 "uuid": "a65f7ce1-b2b5-4557-914e-f373f8736306", 00:10:24.600 "assigned_rate_limits": { 00:10:24.600 "rw_ios_per_sec": 0, 00:10:24.600 "rw_mbytes_per_sec": 0, 00:10:24.600 "r_mbytes_per_sec": 0, 00:10:24.600 "w_mbytes_per_sec": 0 00:10:24.600 }, 00:10:24.600 "claimed": false, 00:10:24.600 "zoned": false, 00:10:24.600 "supported_io_types": { 00:10:24.600 "read": true, 00:10:24.600 "write": true, 00:10:24.600 "unmap": true, 00:10:24.600 "flush": true, 00:10:24.600 "reset": true, 00:10:24.600 "nvme_admin": false, 00:10:24.600 "nvme_io": false, 00:10:24.600 "nvme_io_md": false, 00:10:24.600 "write_zeroes": true, 00:10:24.600 "zcopy": false, 00:10:24.600 "get_zone_info": false, 00:10:24.600 "zone_management": false, 00:10:24.600 "zone_append": false, 00:10:24.600 "compare": false, 00:10:24.600 "compare_and_write": false, 00:10:24.600 "abort": false, 00:10:24.600 "seek_hole": false, 00:10:24.600 "seek_data": false, 00:10:24.600 "copy": false, 00:10:24.600 "nvme_iov_md": false 00:10:24.600 }, 00:10:24.600 "memory_domains": [ 00:10:24.600 { 00:10:24.600 "dma_device_id": "system", 00:10:24.600 "dma_device_type": 1 00:10:24.600 }, 00:10:24.600 
{ 00:10:24.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.600 "dma_device_type": 2 00:10:24.600 }, 00:10:24.600 { 00:10:24.600 "dma_device_id": "system", 00:10:24.600 "dma_device_type": 1 00:10:24.600 }, 00:10:24.600 { 00:10:24.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.600 "dma_device_type": 2 00:10:24.600 }, 00:10:24.600 { 00:10:24.600 "dma_device_id": "system", 00:10:24.600 "dma_device_type": 1 00:10:24.600 }, 00:10:24.600 { 00:10:24.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.600 "dma_device_type": 2 00:10:24.600 }, 00:10:24.600 { 00:10:24.600 "dma_device_id": "system", 00:10:24.600 "dma_device_type": 1 00:10:24.600 }, 00:10:24.600 { 00:10:24.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.600 "dma_device_type": 2 00:10:24.600 } 00:10:24.600 ], 00:10:24.600 "driver_specific": { 00:10:24.600 "raid": { 00:10:24.600 "uuid": "a65f7ce1-b2b5-4557-914e-f373f8736306", 00:10:24.600 "strip_size_kb": 64, 00:10:24.600 "state": "online", 00:10:24.600 "raid_level": "concat", 00:10:24.600 "superblock": true, 00:10:24.600 "num_base_bdevs": 4, 00:10:24.600 "num_base_bdevs_discovered": 4, 00:10:24.600 "num_base_bdevs_operational": 4, 00:10:24.600 "base_bdevs_list": [ 00:10:24.600 { 00:10:24.600 "name": "BaseBdev1", 00:10:24.600 "uuid": "679ab0cb-a496-4224-a028-7637dc39ccf6", 00:10:24.600 "is_configured": true, 00:10:24.600 "data_offset": 2048, 00:10:24.600 "data_size": 63488 00:10:24.600 }, 00:10:24.600 { 00:10:24.600 "name": "BaseBdev2", 00:10:24.600 "uuid": "039b3f87-30fa-4c3e-9a28-3b7ec4e515d5", 00:10:24.600 "is_configured": true, 00:10:24.600 "data_offset": 2048, 00:10:24.600 "data_size": 63488 00:10:24.600 }, 00:10:24.600 { 00:10:24.600 "name": "BaseBdev3", 00:10:24.600 "uuid": "131452d3-95ff-40d6-91ad-9f09095c8ddc", 00:10:24.600 "is_configured": true, 00:10:24.600 "data_offset": 2048, 00:10:24.600 "data_size": 63488 00:10:24.600 }, 00:10:24.600 { 00:10:24.600 "name": "BaseBdev4", 00:10:24.600 "uuid": 
"62e0a5c0-8628-49dd-b714-b461f3adfbc3", 00:10:24.600 "is_configured": true, 00:10:24.600 "data_offset": 2048, 00:10:24.600 "data_size": 63488 00:10:24.600 } 00:10:24.600 ] 00:10:24.600 } 00:10:24.600 } 00:10:24.600 }' 00:10:24.600 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:24.859 BaseBdev2 00:10:24.859 BaseBdev3 00:10:24.859 BaseBdev4' 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.859 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.860 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:10:24.860 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:24.860 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.860 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.860 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.860 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:24.860 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:24.860 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:24.860 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.860 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.860 [2024-10-09 01:30:23.744240] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:24.860 [2024-10-09 01:30:23.744303] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.860 [2024-10-09 01:30:23.744390] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.119 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.119 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:25.119 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:25.119 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:25.119 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:25.119 01:30:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:25.119 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:25.119 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.119 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:25.119 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.119 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.119 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.119 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.119 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.119 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.119 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.119 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.119 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.119 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.119 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.119 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.119 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.119 "name": 
"Existed_Raid", 00:10:25.119 "uuid": "a65f7ce1-b2b5-4557-914e-f373f8736306", 00:10:25.119 "strip_size_kb": 64, 00:10:25.119 "state": "offline", 00:10:25.119 "raid_level": "concat", 00:10:25.119 "superblock": true, 00:10:25.119 "num_base_bdevs": 4, 00:10:25.119 "num_base_bdevs_discovered": 3, 00:10:25.119 "num_base_bdevs_operational": 3, 00:10:25.119 "base_bdevs_list": [ 00:10:25.119 { 00:10:25.119 "name": null, 00:10:25.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.119 "is_configured": false, 00:10:25.119 "data_offset": 0, 00:10:25.119 "data_size": 63488 00:10:25.119 }, 00:10:25.119 { 00:10:25.119 "name": "BaseBdev2", 00:10:25.119 "uuid": "039b3f87-30fa-4c3e-9a28-3b7ec4e515d5", 00:10:25.119 "is_configured": true, 00:10:25.119 "data_offset": 2048, 00:10:25.119 "data_size": 63488 00:10:25.119 }, 00:10:25.119 { 00:10:25.119 "name": "BaseBdev3", 00:10:25.119 "uuid": "131452d3-95ff-40d6-91ad-9f09095c8ddc", 00:10:25.119 "is_configured": true, 00:10:25.119 "data_offset": 2048, 00:10:25.119 "data_size": 63488 00:10:25.119 }, 00:10:25.119 { 00:10:25.119 "name": "BaseBdev4", 00:10:25.119 "uuid": "62e0a5c0-8628-49dd-b714-b461f3adfbc3", 00:10:25.119 "is_configured": true, 00:10:25.119 "data_offset": 2048, 00:10:25.119 "data_size": 63488 00:10:25.119 } 00:10:25.119 ] 00:10:25.119 }' 00:10:25.119 01:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.119 01:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.378 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:25.378 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:25.378 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.378 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:25.378 01:30:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.378 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.638 [2024-10-09 01:30:24.313084] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.638 [2024-10-09 01:30:24.389342] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.638 01:30:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.638 [2024-10-09 01:30:24.464917] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:25.638 [2024-10-09 01:30:24.465021] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.638 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.898 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:25.898 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:25.898 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:25.898 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:25.898 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:25.898 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:25.898 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:25.898 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.898 BaseBdev2 00:10:25.898 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.898 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:25.898 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:25.898 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:25.898 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:25.898 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:25.898 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:25.898 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:25.898 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.898 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.898 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.898 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:25.898 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.898 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.898 [ 00:10:25.898 { 00:10:25.898 "name": "BaseBdev2", 00:10:25.898 "aliases": [ 00:10:25.898 "7fa56e4d-c816-4362-b66e-04df12e7639b" 00:10:25.898 ], 00:10:25.898 "product_name": "Malloc disk", 00:10:25.898 "block_size": 512, 
00:10:25.898 "num_blocks": 65536, 00:10:25.898 "uuid": "7fa56e4d-c816-4362-b66e-04df12e7639b", 00:10:25.898 "assigned_rate_limits": { 00:10:25.898 "rw_ios_per_sec": 0, 00:10:25.898 "rw_mbytes_per_sec": 0, 00:10:25.898 "r_mbytes_per_sec": 0, 00:10:25.898 "w_mbytes_per_sec": 0 00:10:25.898 }, 00:10:25.898 "claimed": false, 00:10:25.898 "zoned": false, 00:10:25.898 "supported_io_types": { 00:10:25.898 "read": true, 00:10:25.898 "write": true, 00:10:25.898 "unmap": true, 00:10:25.898 "flush": true, 00:10:25.898 "reset": true, 00:10:25.898 "nvme_admin": false, 00:10:25.898 "nvme_io": false, 00:10:25.898 "nvme_io_md": false, 00:10:25.898 "write_zeroes": true, 00:10:25.898 "zcopy": true, 00:10:25.898 "get_zone_info": false, 00:10:25.898 "zone_management": false, 00:10:25.898 "zone_append": false, 00:10:25.898 "compare": false, 00:10:25.898 "compare_and_write": false, 00:10:25.898 "abort": true, 00:10:25.898 "seek_hole": false, 00:10:25.898 "seek_data": false, 00:10:25.898 "copy": true, 00:10:25.898 "nvme_iov_md": false 00:10:25.898 }, 00:10:25.898 "memory_domains": [ 00:10:25.898 { 00:10:25.898 "dma_device_id": "system", 00:10:25.898 "dma_device_type": 1 00:10:25.898 }, 00:10:25.898 { 00:10:25.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.898 "dma_device_type": 2 00:10:25.898 } 00:10:25.898 ], 00:10:25.898 "driver_specific": {} 00:10:25.898 } 00:10:25.898 ] 00:10:25.898 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:25.899 01:30:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.899 BaseBdev3 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.899 [ 00:10:25.899 { 00:10:25.899 "name": "BaseBdev3", 00:10:25.899 "aliases": [ 00:10:25.899 "62a6a9ed-df69-49bf-b7ad-d67e5f80f336" 00:10:25.899 ], 
00:10:25.899 "product_name": "Malloc disk", 00:10:25.899 "block_size": 512, 00:10:25.899 "num_blocks": 65536, 00:10:25.899 "uuid": "62a6a9ed-df69-49bf-b7ad-d67e5f80f336", 00:10:25.899 "assigned_rate_limits": { 00:10:25.899 "rw_ios_per_sec": 0, 00:10:25.899 "rw_mbytes_per_sec": 0, 00:10:25.899 "r_mbytes_per_sec": 0, 00:10:25.899 "w_mbytes_per_sec": 0 00:10:25.899 }, 00:10:25.899 "claimed": false, 00:10:25.899 "zoned": false, 00:10:25.899 "supported_io_types": { 00:10:25.899 "read": true, 00:10:25.899 "write": true, 00:10:25.899 "unmap": true, 00:10:25.899 "flush": true, 00:10:25.899 "reset": true, 00:10:25.899 "nvme_admin": false, 00:10:25.899 "nvme_io": false, 00:10:25.899 "nvme_io_md": false, 00:10:25.899 "write_zeroes": true, 00:10:25.899 "zcopy": true, 00:10:25.899 "get_zone_info": false, 00:10:25.899 "zone_management": false, 00:10:25.899 "zone_append": false, 00:10:25.899 "compare": false, 00:10:25.899 "compare_and_write": false, 00:10:25.899 "abort": true, 00:10:25.899 "seek_hole": false, 00:10:25.899 "seek_data": false, 00:10:25.899 "copy": true, 00:10:25.899 "nvme_iov_md": false 00:10:25.899 }, 00:10:25.899 "memory_domains": [ 00:10:25.899 { 00:10:25.899 "dma_device_id": "system", 00:10:25.899 "dma_device_type": 1 00:10:25.899 }, 00:10:25.899 { 00:10:25.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.899 "dma_device_type": 2 00:10:25.899 } 00:10:25.899 ], 00:10:25.899 "driver_specific": {} 00:10:25.899 } 00:10:25.899 ] 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.899 BaseBdev4 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.899 [ 00:10:25.899 { 00:10:25.899 "name": "BaseBdev4", 00:10:25.899 "aliases": [ 00:10:25.899 
"3adf9b33-1e37-4a39-87d7-b21b916b6b48" 00:10:25.899 ], 00:10:25.899 "product_name": "Malloc disk", 00:10:25.899 "block_size": 512, 00:10:25.899 "num_blocks": 65536, 00:10:25.899 "uuid": "3adf9b33-1e37-4a39-87d7-b21b916b6b48", 00:10:25.899 "assigned_rate_limits": { 00:10:25.899 "rw_ios_per_sec": 0, 00:10:25.899 "rw_mbytes_per_sec": 0, 00:10:25.899 "r_mbytes_per_sec": 0, 00:10:25.899 "w_mbytes_per_sec": 0 00:10:25.899 }, 00:10:25.899 "claimed": false, 00:10:25.899 "zoned": false, 00:10:25.899 "supported_io_types": { 00:10:25.899 "read": true, 00:10:25.899 "write": true, 00:10:25.899 "unmap": true, 00:10:25.899 "flush": true, 00:10:25.899 "reset": true, 00:10:25.899 "nvme_admin": false, 00:10:25.899 "nvme_io": false, 00:10:25.899 "nvme_io_md": false, 00:10:25.899 "write_zeroes": true, 00:10:25.899 "zcopy": true, 00:10:25.899 "get_zone_info": false, 00:10:25.899 "zone_management": false, 00:10:25.899 "zone_append": false, 00:10:25.899 "compare": false, 00:10:25.899 "compare_and_write": false, 00:10:25.899 "abort": true, 00:10:25.899 "seek_hole": false, 00:10:25.899 "seek_data": false, 00:10:25.899 "copy": true, 00:10:25.899 "nvme_iov_md": false 00:10:25.899 }, 00:10:25.899 "memory_domains": [ 00:10:25.899 { 00:10:25.899 "dma_device_id": "system", 00:10:25.899 "dma_device_type": 1 00:10:25.899 }, 00:10:25.899 { 00:10:25.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.899 "dma_device_type": 2 00:10:25.899 } 00:10:25.899 ], 00:10:25.899 "driver_specific": {} 00:10:25.899 } 00:10:25.899 ] 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:25.899 01:30:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.899 [2024-10-09 01:30:24.713491] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:25.899 [2024-10-09 01:30:24.713591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:25.899 [2024-10-09 01:30:24.713663] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:25.899 [2024-10-09 01:30:24.715762] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:25.899 [2024-10-09 01:30:24.715846] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.899 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.899 "name": "Existed_Raid", 00:10:25.899 "uuid": "332a30da-4c0c-4241-9459-c1e5bb846dcf", 00:10:25.899 "strip_size_kb": 64, 00:10:25.899 "state": "configuring", 00:10:25.899 "raid_level": "concat", 00:10:25.899 "superblock": true, 00:10:25.899 "num_base_bdevs": 4, 00:10:25.899 "num_base_bdevs_discovered": 3, 00:10:25.899 "num_base_bdevs_operational": 4, 00:10:25.899 "base_bdevs_list": [ 00:10:25.899 { 00:10:25.899 "name": "BaseBdev1", 00:10:25.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.900 "is_configured": false, 00:10:25.900 "data_offset": 0, 00:10:25.900 "data_size": 0 00:10:25.900 }, 00:10:25.900 { 00:10:25.900 "name": "BaseBdev2", 00:10:25.900 "uuid": "7fa56e4d-c816-4362-b66e-04df12e7639b", 00:10:25.900 "is_configured": true, 00:10:25.900 "data_offset": 2048, 00:10:25.900 "data_size": 63488 00:10:25.900 }, 00:10:25.900 { 00:10:25.900 "name": "BaseBdev3", 00:10:25.900 "uuid": "62a6a9ed-df69-49bf-b7ad-d67e5f80f336", 
00:10:25.900 "is_configured": true, 00:10:25.900 "data_offset": 2048, 00:10:25.900 "data_size": 63488 00:10:25.900 }, 00:10:25.900 { 00:10:25.900 "name": "BaseBdev4", 00:10:25.900 "uuid": "3adf9b33-1e37-4a39-87d7-b21b916b6b48", 00:10:25.900 "is_configured": true, 00:10:25.900 "data_offset": 2048, 00:10:25.900 "data_size": 63488 00:10:25.900 } 00:10:25.900 ] 00:10:25.900 }' 00:10:25.900 01:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.900 01:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.468 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:26.468 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.468 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.468 [2024-10-09 01:30:25.121604] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:26.468 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.468 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:26.468 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.468 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.468 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:26.468 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.468 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.468 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:26.468 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.468 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.468 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.468 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.468 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.468 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.468 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.468 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.468 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.468 "name": "Existed_Raid", 00:10:26.468 "uuid": "332a30da-4c0c-4241-9459-c1e5bb846dcf", 00:10:26.468 "strip_size_kb": 64, 00:10:26.468 "state": "configuring", 00:10:26.468 "raid_level": "concat", 00:10:26.468 "superblock": true, 00:10:26.468 "num_base_bdevs": 4, 00:10:26.468 "num_base_bdevs_discovered": 2, 00:10:26.468 "num_base_bdevs_operational": 4, 00:10:26.468 "base_bdevs_list": [ 00:10:26.468 { 00:10:26.468 "name": "BaseBdev1", 00:10:26.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.468 "is_configured": false, 00:10:26.468 "data_offset": 0, 00:10:26.468 "data_size": 0 00:10:26.468 }, 00:10:26.468 { 00:10:26.468 "name": null, 00:10:26.468 "uuid": "7fa56e4d-c816-4362-b66e-04df12e7639b", 00:10:26.468 "is_configured": false, 00:10:26.468 "data_offset": 0, 00:10:26.468 "data_size": 63488 00:10:26.468 }, 00:10:26.468 { 00:10:26.468 "name": "BaseBdev3", 00:10:26.468 "uuid": "62a6a9ed-df69-49bf-b7ad-d67e5f80f336", 
00:10:26.468 "is_configured": true, 00:10:26.468 "data_offset": 2048, 00:10:26.468 "data_size": 63488 00:10:26.468 }, 00:10:26.468 { 00:10:26.468 "name": "BaseBdev4", 00:10:26.468 "uuid": "3adf9b33-1e37-4a39-87d7-b21b916b6b48", 00:10:26.468 "is_configured": true, 00:10:26.468 "data_offset": 2048, 00:10:26.468 "data_size": 63488 00:10:26.468 } 00:10:26.468 ] 00:10:26.468 }' 00:10:26.468 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.468 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.727 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.727 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.727 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.727 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:26.727 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.727 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:26.727 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:26.727 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.727 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.727 [2024-10-09 01:30:25.566369] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.727 BaseBdev1 00:10:26.727 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.727 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev 
BaseBdev1 00:10:26.727 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:26.727 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:26.727 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:26.727 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:26.727 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:26.728 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:26.728 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.728 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.728 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.728 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:26.728 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.728 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.728 [ 00:10:26.728 { 00:10:26.728 "name": "BaseBdev1", 00:10:26.728 "aliases": [ 00:10:26.728 "4f1ed740-2a43-4a7d-85b4-68147253660b" 00:10:26.728 ], 00:10:26.728 "product_name": "Malloc disk", 00:10:26.728 "block_size": 512, 00:10:26.728 "num_blocks": 65536, 00:10:26.728 "uuid": "4f1ed740-2a43-4a7d-85b4-68147253660b", 00:10:26.728 "assigned_rate_limits": { 00:10:26.728 "rw_ios_per_sec": 0, 00:10:26.728 "rw_mbytes_per_sec": 0, 00:10:26.728 "r_mbytes_per_sec": 0, 00:10:26.728 "w_mbytes_per_sec": 0 00:10:26.728 }, 00:10:26.728 "claimed": true, 00:10:26.728 "claim_type": 
"exclusive_write", 00:10:26.728 "zoned": false, 00:10:26.728 "supported_io_types": { 00:10:26.728 "read": true, 00:10:26.728 "write": true, 00:10:26.728 "unmap": true, 00:10:26.728 "flush": true, 00:10:26.728 "reset": true, 00:10:26.728 "nvme_admin": false, 00:10:26.728 "nvme_io": false, 00:10:26.728 "nvme_io_md": false, 00:10:26.728 "write_zeroes": true, 00:10:26.728 "zcopy": true, 00:10:26.728 "get_zone_info": false, 00:10:26.728 "zone_management": false, 00:10:26.728 "zone_append": false, 00:10:26.728 "compare": false, 00:10:26.728 "compare_and_write": false, 00:10:26.728 "abort": true, 00:10:26.728 "seek_hole": false, 00:10:26.728 "seek_data": false, 00:10:26.728 "copy": true, 00:10:26.728 "nvme_iov_md": false 00:10:26.728 }, 00:10:26.728 "memory_domains": [ 00:10:26.728 { 00:10:26.728 "dma_device_id": "system", 00:10:26.728 "dma_device_type": 1 00:10:26.728 }, 00:10:26.728 { 00:10:26.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.728 "dma_device_type": 2 00:10:26.728 } 00:10:26.728 ], 00:10:26.728 "driver_specific": {} 00:10:26.728 } 00:10:26.728 ] 00:10:26.728 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.728 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:26.728 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:26.728 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.728 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.728 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:26.728 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.728 01:30:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.728 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.728 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.728 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.728 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.728 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.728 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.728 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.728 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.988 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.988 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.988 "name": "Existed_Raid", 00:10:26.988 "uuid": "332a30da-4c0c-4241-9459-c1e5bb846dcf", 00:10:26.988 "strip_size_kb": 64, 00:10:26.988 "state": "configuring", 00:10:26.988 "raid_level": "concat", 00:10:26.988 "superblock": true, 00:10:26.988 "num_base_bdevs": 4, 00:10:26.988 "num_base_bdevs_discovered": 3, 00:10:26.988 "num_base_bdevs_operational": 4, 00:10:26.988 "base_bdevs_list": [ 00:10:26.988 { 00:10:26.988 "name": "BaseBdev1", 00:10:26.988 "uuid": "4f1ed740-2a43-4a7d-85b4-68147253660b", 00:10:26.988 "is_configured": true, 00:10:26.988 "data_offset": 2048, 00:10:26.988 "data_size": 63488 00:10:26.988 }, 00:10:26.988 { 00:10:26.988 "name": null, 00:10:26.988 "uuid": "7fa56e4d-c816-4362-b66e-04df12e7639b", 00:10:26.988 "is_configured": false, 00:10:26.988 "data_offset": 0, 
00:10:26.988 "data_size": 63488 00:10:26.988 }, 00:10:26.988 { 00:10:26.988 "name": "BaseBdev3", 00:10:26.988 "uuid": "62a6a9ed-df69-49bf-b7ad-d67e5f80f336", 00:10:26.988 "is_configured": true, 00:10:26.988 "data_offset": 2048, 00:10:26.988 "data_size": 63488 00:10:26.988 }, 00:10:26.988 { 00:10:26.988 "name": "BaseBdev4", 00:10:26.988 "uuid": "3adf9b33-1e37-4a39-87d7-b21b916b6b48", 00:10:26.988 "is_configured": true, 00:10:26.988 "data_offset": 2048, 00:10:26.988 "data_size": 63488 00:10:26.988 } 00:10:26.988 ] 00:10:26.988 }' 00:10:26.988 01:30:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.988 01:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.256 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:27.256 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.256 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.256 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.256 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.256 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:27.256 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:27.256 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.256 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.256 [2024-10-09 01:30:26.058560] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:27.256 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:10:27.256 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:27.257 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.257 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.257 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.257 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.257 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.257 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.257 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.257 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.257 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.257 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.257 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.257 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.257 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.257 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.257 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.257 "name": "Existed_Raid", 00:10:27.257 "uuid": "332a30da-4c0c-4241-9459-c1e5bb846dcf", 
00:10:27.257 "strip_size_kb": 64, 00:10:27.257 "state": "configuring", 00:10:27.257 "raid_level": "concat", 00:10:27.257 "superblock": true, 00:10:27.257 "num_base_bdevs": 4, 00:10:27.257 "num_base_bdevs_discovered": 2, 00:10:27.257 "num_base_bdevs_operational": 4, 00:10:27.257 "base_bdevs_list": [ 00:10:27.257 { 00:10:27.257 "name": "BaseBdev1", 00:10:27.257 "uuid": "4f1ed740-2a43-4a7d-85b4-68147253660b", 00:10:27.257 "is_configured": true, 00:10:27.257 "data_offset": 2048, 00:10:27.257 "data_size": 63488 00:10:27.257 }, 00:10:27.257 { 00:10:27.257 "name": null, 00:10:27.257 "uuid": "7fa56e4d-c816-4362-b66e-04df12e7639b", 00:10:27.257 "is_configured": false, 00:10:27.257 "data_offset": 0, 00:10:27.257 "data_size": 63488 00:10:27.257 }, 00:10:27.257 { 00:10:27.257 "name": null, 00:10:27.257 "uuid": "62a6a9ed-df69-49bf-b7ad-d67e5f80f336", 00:10:27.257 "is_configured": false, 00:10:27.257 "data_offset": 0, 00:10:27.257 "data_size": 63488 00:10:27.257 }, 00:10:27.257 { 00:10:27.257 "name": "BaseBdev4", 00:10:27.257 "uuid": "3adf9b33-1e37-4a39-87d7-b21b916b6b48", 00:10:27.257 "is_configured": true, 00:10:27.257 "data_offset": 2048, 00:10:27.257 "data_size": 63488 00:10:27.257 } 00:10:27.257 ] 00:10:27.257 }' 00:10:27.257 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.257 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.830 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:27.830 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.830 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.830 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.830 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:27.830 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:27.830 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:27.830 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.830 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.830 [2024-10-09 01:30:26.494707] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:27.830 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.830 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:27.830 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.830 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.830 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.830 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.830 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.830 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.830 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.830 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.830 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.830 01:30:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.830 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.830 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.830 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.830 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.830 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.830 "name": "Existed_Raid", 00:10:27.830 "uuid": "332a30da-4c0c-4241-9459-c1e5bb846dcf", 00:10:27.830 "strip_size_kb": 64, 00:10:27.830 "state": "configuring", 00:10:27.830 "raid_level": "concat", 00:10:27.830 "superblock": true, 00:10:27.830 "num_base_bdevs": 4, 00:10:27.830 "num_base_bdevs_discovered": 3, 00:10:27.830 "num_base_bdevs_operational": 4, 00:10:27.830 "base_bdevs_list": [ 00:10:27.831 { 00:10:27.831 "name": "BaseBdev1", 00:10:27.831 "uuid": "4f1ed740-2a43-4a7d-85b4-68147253660b", 00:10:27.831 "is_configured": true, 00:10:27.831 "data_offset": 2048, 00:10:27.831 "data_size": 63488 00:10:27.831 }, 00:10:27.831 { 00:10:27.831 "name": null, 00:10:27.831 "uuid": "7fa56e4d-c816-4362-b66e-04df12e7639b", 00:10:27.831 "is_configured": false, 00:10:27.831 "data_offset": 0, 00:10:27.831 "data_size": 63488 00:10:27.831 }, 00:10:27.831 { 00:10:27.831 "name": "BaseBdev3", 00:10:27.831 "uuid": "62a6a9ed-df69-49bf-b7ad-d67e5f80f336", 00:10:27.831 "is_configured": true, 00:10:27.831 "data_offset": 2048, 00:10:27.831 "data_size": 63488 00:10:27.831 }, 00:10:27.831 { 00:10:27.831 "name": "BaseBdev4", 00:10:27.831 "uuid": "3adf9b33-1e37-4a39-87d7-b21b916b6b48", 00:10:27.831 "is_configured": true, 00:10:27.831 "data_offset": 2048, 00:10:27.831 "data_size": 63488 00:10:27.831 } 00:10:27.831 ] 00:10:27.831 }' 00:10:27.831 01:30:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.831 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.090 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.090 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:28.090 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.090 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.090 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.090 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:28.090 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:28.090 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.090 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.090 [2024-10-09 01:30:26.942847] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:28.090 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.090 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:28.090 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.090 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.090 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.090 01:30:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.090 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.090 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.090 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.090 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.090 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.091 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.091 01:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.091 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.091 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.353 01:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.353 01:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.353 "name": "Existed_Raid", 00:10:28.353 "uuid": "332a30da-4c0c-4241-9459-c1e5bb846dcf", 00:10:28.353 "strip_size_kb": 64, 00:10:28.353 "state": "configuring", 00:10:28.353 "raid_level": "concat", 00:10:28.353 "superblock": true, 00:10:28.353 "num_base_bdevs": 4, 00:10:28.353 "num_base_bdevs_discovered": 2, 00:10:28.353 "num_base_bdevs_operational": 4, 00:10:28.353 "base_bdevs_list": [ 00:10:28.353 { 00:10:28.353 "name": null, 00:10:28.353 "uuid": "4f1ed740-2a43-4a7d-85b4-68147253660b", 00:10:28.353 "is_configured": false, 00:10:28.354 "data_offset": 0, 00:10:28.354 "data_size": 63488 00:10:28.354 }, 00:10:28.354 { 00:10:28.354 
"name": null, 00:10:28.354 "uuid": "7fa56e4d-c816-4362-b66e-04df12e7639b", 00:10:28.354 "is_configured": false, 00:10:28.354 "data_offset": 0, 00:10:28.354 "data_size": 63488 00:10:28.354 }, 00:10:28.354 { 00:10:28.354 "name": "BaseBdev3", 00:10:28.354 "uuid": "62a6a9ed-df69-49bf-b7ad-d67e5f80f336", 00:10:28.354 "is_configured": true, 00:10:28.354 "data_offset": 2048, 00:10:28.354 "data_size": 63488 00:10:28.354 }, 00:10:28.354 { 00:10:28.354 "name": "BaseBdev4", 00:10:28.354 "uuid": "3adf9b33-1e37-4a39-87d7-b21b916b6b48", 00:10:28.354 "is_configured": true, 00:10:28.354 "data_offset": 2048, 00:10:28.354 "data_size": 63488 00:10:28.354 } 00:10:28.354 ] 00:10:28.354 }' 00:10:28.354 01:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.354 01:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.618 01:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.618 01:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.618 01:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.618 01:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:28.618 01:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.618 01:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:28.618 01:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:28.618 01:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.618 01:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.618 [2024-10-09 01:30:27.474867] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:28.618 01:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.618 01:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:28.618 01:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.618 01:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.618 01:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.618 01:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.618 01:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.618 01:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.618 01:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.618 01:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.618 01:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.618 01:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.618 01:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.618 01:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.618 01:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.618 01:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.878 01:30:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.878 "name": "Existed_Raid", 00:10:28.878 "uuid": "332a30da-4c0c-4241-9459-c1e5bb846dcf", 00:10:28.878 "strip_size_kb": 64, 00:10:28.878 "state": "configuring", 00:10:28.878 "raid_level": "concat", 00:10:28.878 "superblock": true, 00:10:28.878 "num_base_bdevs": 4, 00:10:28.878 "num_base_bdevs_discovered": 3, 00:10:28.878 "num_base_bdevs_operational": 4, 00:10:28.878 "base_bdevs_list": [ 00:10:28.878 { 00:10:28.878 "name": null, 00:10:28.878 "uuid": "4f1ed740-2a43-4a7d-85b4-68147253660b", 00:10:28.878 "is_configured": false, 00:10:28.878 "data_offset": 0, 00:10:28.878 "data_size": 63488 00:10:28.878 }, 00:10:28.878 { 00:10:28.878 "name": "BaseBdev2", 00:10:28.878 "uuid": "7fa56e4d-c816-4362-b66e-04df12e7639b", 00:10:28.878 "is_configured": true, 00:10:28.878 "data_offset": 2048, 00:10:28.878 "data_size": 63488 00:10:28.878 }, 00:10:28.878 { 00:10:28.878 "name": "BaseBdev3", 00:10:28.878 "uuid": "62a6a9ed-df69-49bf-b7ad-d67e5f80f336", 00:10:28.878 "is_configured": true, 00:10:28.878 "data_offset": 2048, 00:10:28.878 "data_size": 63488 00:10:28.878 }, 00:10:28.878 { 00:10:28.878 "name": "BaseBdev4", 00:10:28.878 "uuid": "3adf9b33-1e37-4a39-87d7-b21b916b6b48", 00:10:28.878 "is_configured": true, 00:10:28.878 "data_offset": 2048, 00:10:28.878 "data_size": 63488 00:10:28.878 } 00:10:28.878 ] 00:10:28.878 }' 00:10:28.878 01:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.878 01:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.138 01:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.138 01:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.138 01:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.138 01:30:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:29.138 01:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.138 01:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:29.138 01:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:29.138 01:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.138 01:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.138 01:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.138 01:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.138 01:30:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4f1ed740-2a43-4a7d-85b4-68147253660b 00:10:29.138 01:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.138 01:30:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.138 [2024-10-09 01:30:28.012279] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:29.138 [2024-10-09 01:30:28.012604] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:29.138 [2024-10-09 01:30:28.012660] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:29.138 [2024-10-09 01:30:28.012971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:10:29.138 NewBaseBdev 00:10:29.138 [2024-10-09 01:30:28.013141] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:29.138 [2024-10-09 
01:30:28.013153] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:29.138 [2024-10-09 01:30:28.013261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.138 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.138 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:29.138 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:29.138 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:29.138 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:29.138 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:29.138 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:29.138 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:29.138 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.138 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.138 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.138 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:29.138 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.138 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.398 [ 00:10:29.398 { 00:10:29.398 "name": "NewBaseBdev", 00:10:29.398 "aliases": [ 00:10:29.398 
"4f1ed740-2a43-4a7d-85b4-68147253660b" 00:10:29.398 ], 00:10:29.398 "product_name": "Malloc disk", 00:10:29.398 "block_size": 512, 00:10:29.398 "num_blocks": 65536, 00:10:29.398 "uuid": "4f1ed740-2a43-4a7d-85b4-68147253660b", 00:10:29.398 "assigned_rate_limits": { 00:10:29.398 "rw_ios_per_sec": 0, 00:10:29.398 "rw_mbytes_per_sec": 0, 00:10:29.398 "r_mbytes_per_sec": 0, 00:10:29.398 "w_mbytes_per_sec": 0 00:10:29.398 }, 00:10:29.398 "claimed": true, 00:10:29.398 "claim_type": "exclusive_write", 00:10:29.398 "zoned": false, 00:10:29.398 "supported_io_types": { 00:10:29.398 "read": true, 00:10:29.398 "write": true, 00:10:29.398 "unmap": true, 00:10:29.398 "flush": true, 00:10:29.398 "reset": true, 00:10:29.398 "nvme_admin": false, 00:10:29.398 "nvme_io": false, 00:10:29.398 "nvme_io_md": false, 00:10:29.398 "write_zeroes": true, 00:10:29.398 "zcopy": true, 00:10:29.398 "get_zone_info": false, 00:10:29.398 "zone_management": false, 00:10:29.398 "zone_append": false, 00:10:29.398 "compare": false, 00:10:29.398 "compare_and_write": false, 00:10:29.398 "abort": true, 00:10:29.398 "seek_hole": false, 00:10:29.398 "seek_data": false, 00:10:29.398 "copy": true, 00:10:29.398 "nvme_iov_md": false 00:10:29.398 }, 00:10:29.398 "memory_domains": [ 00:10:29.398 { 00:10:29.398 "dma_device_id": "system", 00:10:29.398 "dma_device_type": 1 00:10:29.398 }, 00:10:29.398 { 00:10:29.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.398 "dma_device_type": 2 00:10:29.398 } 00:10:29.398 ], 00:10:29.398 "driver_specific": {} 00:10:29.398 } 00:10:29.398 ] 00:10:29.398 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.398 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:29.398 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:29.398 01:30:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.398 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.398 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.398 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.398 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.398 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.398 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.398 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.398 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.398 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.398 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.398 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.398 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.398 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.398 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.398 "name": "Existed_Raid", 00:10:29.398 "uuid": "332a30da-4c0c-4241-9459-c1e5bb846dcf", 00:10:29.398 "strip_size_kb": 64, 00:10:29.398 "state": "online", 00:10:29.398 "raid_level": "concat", 00:10:29.398 "superblock": true, 00:10:29.398 "num_base_bdevs": 4, 00:10:29.398 "num_base_bdevs_discovered": 4, 00:10:29.398 
"num_base_bdevs_operational": 4, 00:10:29.398 "base_bdevs_list": [ 00:10:29.398 { 00:10:29.398 "name": "NewBaseBdev", 00:10:29.398 "uuid": "4f1ed740-2a43-4a7d-85b4-68147253660b", 00:10:29.398 "is_configured": true, 00:10:29.398 "data_offset": 2048, 00:10:29.398 "data_size": 63488 00:10:29.398 }, 00:10:29.398 { 00:10:29.398 "name": "BaseBdev2", 00:10:29.398 "uuid": "7fa56e4d-c816-4362-b66e-04df12e7639b", 00:10:29.398 "is_configured": true, 00:10:29.398 "data_offset": 2048, 00:10:29.398 "data_size": 63488 00:10:29.398 }, 00:10:29.398 { 00:10:29.398 "name": "BaseBdev3", 00:10:29.398 "uuid": "62a6a9ed-df69-49bf-b7ad-d67e5f80f336", 00:10:29.398 "is_configured": true, 00:10:29.398 "data_offset": 2048, 00:10:29.398 "data_size": 63488 00:10:29.398 }, 00:10:29.398 { 00:10:29.398 "name": "BaseBdev4", 00:10:29.398 "uuid": "3adf9b33-1e37-4a39-87d7-b21b916b6b48", 00:10:29.398 "is_configured": true, 00:10:29.398 "data_offset": 2048, 00:10:29.398 "data_size": 63488 00:10:29.398 } 00:10:29.398 ] 00:10:29.398 }' 00:10:29.398 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.398 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.658 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:29.658 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:29.658 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:29.658 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:29.658 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:29.658 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:29.658 01:30:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:29.658 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:29.658 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.658 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.658 [2024-10-09 01:30:28.500792] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:29.658 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.658 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:29.658 "name": "Existed_Raid", 00:10:29.658 "aliases": [ 00:10:29.658 "332a30da-4c0c-4241-9459-c1e5bb846dcf" 00:10:29.658 ], 00:10:29.658 "product_name": "Raid Volume", 00:10:29.658 "block_size": 512, 00:10:29.658 "num_blocks": 253952, 00:10:29.658 "uuid": "332a30da-4c0c-4241-9459-c1e5bb846dcf", 00:10:29.658 "assigned_rate_limits": { 00:10:29.658 "rw_ios_per_sec": 0, 00:10:29.658 "rw_mbytes_per_sec": 0, 00:10:29.658 "r_mbytes_per_sec": 0, 00:10:29.658 "w_mbytes_per_sec": 0 00:10:29.658 }, 00:10:29.658 "claimed": false, 00:10:29.658 "zoned": false, 00:10:29.658 "supported_io_types": { 00:10:29.658 "read": true, 00:10:29.658 "write": true, 00:10:29.658 "unmap": true, 00:10:29.659 "flush": true, 00:10:29.659 "reset": true, 00:10:29.659 "nvme_admin": false, 00:10:29.659 "nvme_io": false, 00:10:29.659 "nvme_io_md": false, 00:10:29.659 "write_zeroes": true, 00:10:29.659 "zcopy": false, 00:10:29.659 "get_zone_info": false, 00:10:29.659 "zone_management": false, 00:10:29.659 "zone_append": false, 00:10:29.659 "compare": false, 00:10:29.659 "compare_and_write": false, 00:10:29.659 "abort": false, 00:10:29.659 "seek_hole": false, 00:10:29.659 "seek_data": false, 00:10:29.659 "copy": false, 00:10:29.659 "nvme_iov_md": false 00:10:29.659 }, 00:10:29.659 
"memory_domains": [ 00:10:29.659 { 00:10:29.659 "dma_device_id": "system", 00:10:29.659 "dma_device_type": 1 00:10:29.659 }, 00:10:29.659 { 00:10:29.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.659 "dma_device_type": 2 00:10:29.659 }, 00:10:29.659 { 00:10:29.659 "dma_device_id": "system", 00:10:29.659 "dma_device_type": 1 00:10:29.659 }, 00:10:29.659 { 00:10:29.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.659 "dma_device_type": 2 00:10:29.659 }, 00:10:29.659 { 00:10:29.659 "dma_device_id": "system", 00:10:29.659 "dma_device_type": 1 00:10:29.659 }, 00:10:29.659 { 00:10:29.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.659 "dma_device_type": 2 00:10:29.659 }, 00:10:29.659 { 00:10:29.659 "dma_device_id": "system", 00:10:29.659 "dma_device_type": 1 00:10:29.659 }, 00:10:29.659 { 00:10:29.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.659 "dma_device_type": 2 00:10:29.659 } 00:10:29.659 ], 00:10:29.659 "driver_specific": { 00:10:29.659 "raid": { 00:10:29.659 "uuid": "332a30da-4c0c-4241-9459-c1e5bb846dcf", 00:10:29.659 "strip_size_kb": 64, 00:10:29.659 "state": "online", 00:10:29.659 "raid_level": "concat", 00:10:29.659 "superblock": true, 00:10:29.659 "num_base_bdevs": 4, 00:10:29.659 "num_base_bdevs_discovered": 4, 00:10:29.659 "num_base_bdevs_operational": 4, 00:10:29.659 "base_bdevs_list": [ 00:10:29.659 { 00:10:29.659 "name": "NewBaseBdev", 00:10:29.659 "uuid": "4f1ed740-2a43-4a7d-85b4-68147253660b", 00:10:29.659 "is_configured": true, 00:10:29.659 "data_offset": 2048, 00:10:29.659 "data_size": 63488 00:10:29.659 }, 00:10:29.659 { 00:10:29.659 "name": "BaseBdev2", 00:10:29.659 "uuid": "7fa56e4d-c816-4362-b66e-04df12e7639b", 00:10:29.659 "is_configured": true, 00:10:29.659 "data_offset": 2048, 00:10:29.659 "data_size": 63488 00:10:29.659 }, 00:10:29.659 { 00:10:29.659 "name": "BaseBdev3", 00:10:29.659 "uuid": "62a6a9ed-df69-49bf-b7ad-d67e5f80f336", 00:10:29.659 "is_configured": true, 00:10:29.659 "data_offset": 2048, 00:10:29.659 
"data_size": 63488 00:10:29.659 }, 00:10:29.659 { 00:10:29.659 "name": "BaseBdev4", 00:10:29.659 "uuid": "3adf9b33-1e37-4a39-87d7-b21b916b6b48", 00:10:29.659 "is_configured": true, 00:10:29.659 "data_offset": 2048, 00:10:29.659 "data_size": 63488 00:10:29.659 } 00:10:29.659 ] 00:10:29.659 } 00:10:29.659 } 00:10:29.659 }' 00:10:29.659 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:29.919 BaseBdev2 00:10:29.919 BaseBdev3 00:10:29.919 BaseBdev4' 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.919 [2024-10-09 01:30:28.800532] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:29.919 [2024-10-09 01:30:28.800557] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:29.919 [2024-10-09 01:30:28.800642] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.919 [2024-10-09 01:30:28.800714] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:29.919 [2024-10-09 01:30:28.800730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.919 
01:30:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83951 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 83951 ']' 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 83951 00:10:29.919 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:30.179 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:30.179 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83951 00:10:30.179 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:30.179 killing process with pid 83951 00:10:30.179 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:30.180 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83951' 00:10:30.180 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 83951 00:10:30.180 [2024-10-09 01:30:28.845723] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:30.180 01:30:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 83951 00:10:30.180 [2024-10-09 01:30:28.918376] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:30.439 01:30:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:30.440 00:10:30.440 real 0m9.561s 00:10:30.440 user 0m15.882s 00:10:30.440 sys 0m2.114s 00:10:30.440 01:30:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:30.440 01:30:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.440 ************************************ 
00:10:30.440 END TEST raid_state_function_test_sb 00:10:30.440 ************************************ 00:10:30.700 01:30:29 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:10:30.700 01:30:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:30.700 01:30:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.700 01:30:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:30.700 ************************************ 00:10:30.700 START TEST raid_superblock_test 00:10:30.700 ************************************ 00:10:30.700 01:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:10:30.700 01:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:30.700 01:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:30.700 01:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:30.700 01:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:30.700 01:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:30.700 01:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:30.700 01:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:30.700 01:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:30.700 01:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:30.700 01:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:30.700 01:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:30.700 01:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local 
raid_bdev_uuid 00:10:30.700 01:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:30.700 01:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:30.700 01:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:30.700 01:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:30.700 01:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84598 00:10:30.700 01:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:30.700 01:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84598 00:10:30.700 01:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 84598 ']' 00:10:30.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.700 01:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.700 01:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:30.700 01:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.700 01:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:30.700 01:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.700 [2024-10-09 01:30:29.450407] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:10:30.700 [2024-10-09 01:30:29.450643] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84598 ] 00:10:30.700 [2024-10-09 01:30:29.586704] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:30.960 [2024-10-09 01:30:29.615514] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.960 [2024-10-09 01:30:29.684376] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.960 [2024-10-09 01:30:29.759743] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.960 [2024-10-09 01:30:29.759782] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.529 malloc1 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.529 [2024-10-09 01:30:30.306628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:31.529 [2024-10-09 01:30:30.306759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.529 [2024-10-09 01:30:30.306805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:31.529 [2024-10-09 01:30:30.306843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.529 [2024-10-09 01:30:30.309295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.529 [2024-10-09 01:30:30.309379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:31.529 pt1 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.529 malloc2 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.529 [2024-10-09 01:30:30.361716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:31.529 [2024-10-09 01:30:30.361914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.529 [2024-10-09 01:30:30.362001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:31.529 [2024-10-09 01:30:30.362082] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.529 [2024-10-09 01:30:30.367172] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.529 [2024-10-09 01:30:30.367317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:31.529 pt2 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.529 malloc3 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.529 01:30:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.529 [2024-10-09 01:30:30.402511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:31.529 [2024-10-09 01:30:30.402633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.529 [2024-10-09 01:30:30.402674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:31.529 [2024-10-09 01:30:30.402704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.529 [2024-10-09 01:30:30.405072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.529 [2024-10-09 01:30:30.405143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:31.529 pt3 00:10:31.529 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.530 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:31.530 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:31.530 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:31.530 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:31.530 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:31.530 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:31.530 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:31.530 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:31.530 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:31.530 01:30:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.530 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.789 malloc4 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.789 [2024-10-09 01:30:30.441325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:31.789 [2024-10-09 01:30:30.441378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.789 [2024-10-09 01:30:30.441402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:31.789 [2024-10-09 01:30:30.441411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.789 [2024-10-09 01:30:30.443737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.789 [2024-10-09 01:30:30.443771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:31.789 pt4 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.789 01:30:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.789 [2024-10-09 01:30:30.453365] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:31.789 [2024-10-09 01:30:30.455480] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:31.789 [2024-10-09 01:30:30.455622] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:31.789 [2024-10-09 01:30:30.455718] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:31.789 [2024-10-09 01:30:30.455914] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:31.789 [2024-10-09 01:30:30.455958] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:31.789 [2024-10-09 01:30:30.456264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:31.789 [2024-10-09 01:30:30.456458] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:31.789 [2024-10-09 01:30:30.456507] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:31.789 [2024-10-09 01:30:30.456702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.789 "name": "raid_bdev1", 00:10:31.789 "uuid": "8890a8dc-f34e-4820-a5cb-6199cc47f2b6", 00:10:31.789 "strip_size_kb": 64, 00:10:31.789 "state": "online", 00:10:31.789 "raid_level": "concat", 00:10:31.789 "superblock": true, 00:10:31.789 "num_base_bdevs": 4, 00:10:31.789 "num_base_bdevs_discovered": 4, 00:10:31.789 "num_base_bdevs_operational": 4, 00:10:31.789 "base_bdevs_list": [ 00:10:31.789 { 00:10:31.789 "name": "pt1", 00:10:31.789 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:31.789 "is_configured": true, 00:10:31.789 "data_offset": 2048, 00:10:31.789 "data_size": 63488 00:10:31.789 }, 00:10:31.789 { 00:10:31.789 "name": "pt2", 00:10:31.789 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:31.789 "is_configured": true, 00:10:31.789 "data_offset": 2048, 00:10:31.789 
"data_size": 63488 00:10:31.789 }, 00:10:31.789 { 00:10:31.789 "name": "pt3", 00:10:31.789 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:31.789 "is_configured": true, 00:10:31.789 "data_offset": 2048, 00:10:31.789 "data_size": 63488 00:10:31.789 }, 00:10:31.789 { 00:10:31.789 "name": "pt4", 00:10:31.789 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:31.789 "is_configured": true, 00:10:31.789 "data_offset": 2048, 00:10:31.789 "data_size": 63488 00:10:31.789 } 00:10:31.789 ] 00:10:31.789 }' 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.789 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.048 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:32.048 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:32.048 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:32.048 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:32.048 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:32.048 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:32.048 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:32.048 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.048 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.048 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:32.048 [2024-10-09 01:30:30.857766] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:32.049 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:10:32.049 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:32.049 "name": "raid_bdev1", 00:10:32.049 "aliases": [ 00:10:32.049 "8890a8dc-f34e-4820-a5cb-6199cc47f2b6" 00:10:32.049 ], 00:10:32.049 "product_name": "Raid Volume", 00:10:32.049 "block_size": 512, 00:10:32.049 "num_blocks": 253952, 00:10:32.049 "uuid": "8890a8dc-f34e-4820-a5cb-6199cc47f2b6", 00:10:32.049 "assigned_rate_limits": { 00:10:32.049 "rw_ios_per_sec": 0, 00:10:32.049 "rw_mbytes_per_sec": 0, 00:10:32.049 "r_mbytes_per_sec": 0, 00:10:32.049 "w_mbytes_per_sec": 0 00:10:32.049 }, 00:10:32.049 "claimed": false, 00:10:32.049 "zoned": false, 00:10:32.049 "supported_io_types": { 00:10:32.049 "read": true, 00:10:32.049 "write": true, 00:10:32.049 "unmap": true, 00:10:32.049 "flush": true, 00:10:32.049 "reset": true, 00:10:32.049 "nvme_admin": false, 00:10:32.049 "nvme_io": false, 00:10:32.049 "nvme_io_md": false, 00:10:32.049 "write_zeroes": true, 00:10:32.049 "zcopy": false, 00:10:32.049 "get_zone_info": false, 00:10:32.049 "zone_management": false, 00:10:32.049 "zone_append": false, 00:10:32.049 "compare": false, 00:10:32.049 "compare_and_write": false, 00:10:32.049 "abort": false, 00:10:32.049 "seek_hole": false, 00:10:32.049 "seek_data": false, 00:10:32.049 "copy": false, 00:10:32.049 "nvme_iov_md": false 00:10:32.049 }, 00:10:32.049 "memory_domains": [ 00:10:32.049 { 00:10:32.049 "dma_device_id": "system", 00:10:32.049 "dma_device_type": 1 00:10:32.049 }, 00:10:32.049 { 00:10:32.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.049 "dma_device_type": 2 00:10:32.049 }, 00:10:32.049 { 00:10:32.049 "dma_device_id": "system", 00:10:32.049 "dma_device_type": 1 00:10:32.049 }, 00:10:32.049 { 00:10:32.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.049 "dma_device_type": 2 00:10:32.049 }, 00:10:32.049 { 00:10:32.049 "dma_device_id": "system", 00:10:32.049 "dma_device_type": 1 00:10:32.049 }, 00:10:32.049 { 00:10:32.049 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:32.049 "dma_device_type": 2 00:10:32.049 }, 00:10:32.049 { 00:10:32.049 "dma_device_id": "system", 00:10:32.049 "dma_device_type": 1 00:10:32.049 }, 00:10:32.049 { 00:10:32.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.049 "dma_device_type": 2 00:10:32.049 } 00:10:32.049 ], 00:10:32.049 "driver_specific": { 00:10:32.049 "raid": { 00:10:32.049 "uuid": "8890a8dc-f34e-4820-a5cb-6199cc47f2b6", 00:10:32.049 "strip_size_kb": 64, 00:10:32.049 "state": "online", 00:10:32.049 "raid_level": "concat", 00:10:32.049 "superblock": true, 00:10:32.049 "num_base_bdevs": 4, 00:10:32.049 "num_base_bdevs_discovered": 4, 00:10:32.049 "num_base_bdevs_operational": 4, 00:10:32.049 "base_bdevs_list": [ 00:10:32.049 { 00:10:32.049 "name": "pt1", 00:10:32.049 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:32.049 "is_configured": true, 00:10:32.049 "data_offset": 2048, 00:10:32.049 "data_size": 63488 00:10:32.049 }, 00:10:32.049 { 00:10:32.049 "name": "pt2", 00:10:32.049 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:32.049 "is_configured": true, 00:10:32.049 "data_offset": 2048, 00:10:32.049 "data_size": 63488 00:10:32.049 }, 00:10:32.049 { 00:10:32.049 "name": "pt3", 00:10:32.049 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:32.049 "is_configured": true, 00:10:32.049 "data_offset": 2048, 00:10:32.049 "data_size": 63488 00:10:32.049 }, 00:10:32.049 { 00:10:32.049 "name": "pt4", 00:10:32.049 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:32.049 "is_configured": true, 00:10:32.049 "data_offset": 2048, 00:10:32.049 "data_size": 63488 00:10:32.049 } 00:10:32.049 ] 00:10:32.049 } 00:10:32.049 } 00:10:32.049 }' 00:10:32.049 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:32.308 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:32.308 pt2 00:10:32.308 pt3 00:10:32.308 
pt4' 00:10:32.308 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.308 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:32.308 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.308 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.308 01:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:32.308 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.308 01:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.308 01:30:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs 
-b raid_bdev1 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.308 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.568 [2024-10-09 01:30:31.205808] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8890a8dc-f34e-4820-a5cb-6199cc47f2b6 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8890a8dc-f34e-4820-a5cb-6199cc47f2b6 ']' 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.568 [2024-10-09 01:30:31.249547] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:32.568 [2024-10-09 01:30:31.249614] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.568 [2024-10-09 01:30:31.249741] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.568 [2024-10-09 01:30:31.249858] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:32.568 [2024-10-09 01:30:31.249908] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:32.568 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.569 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.569 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.569 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:32.569 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:32.569 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:32.569 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:32.569 01:30:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:32.569 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:32.569 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:32.569 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:32.569 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:32.569 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.569 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.569 [2024-10-09 01:30:31.413611] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:32.569 [2024-10-09 01:30:31.415801] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:32.569 [2024-10-09 01:30:31.415883] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:32.569 [2024-10-09 01:30:31.415933] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:32.569 [2024-10-09 01:30:31.415999] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:32.569 [2024-10-09 01:30:31.416072] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:32.569 [2024-10-09 01:30:31.416115] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:32.569 [2024-10-09 01:30:31.416149] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:32.569 [2024-10-09 
01:30:31.416161] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:32.569 [2024-10-09 01:30:31.416171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:32.569 request: 00:10:32.569 { 00:10:32.569 "name": "raid_bdev1", 00:10:32.569 "raid_level": "concat", 00:10:32.569 "base_bdevs": [ 00:10:32.569 "malloc1", 00:10:32.569 "malloc2", 00:10:32.569 "malloc3", 00:10:32.569 "malloc4" 00:10:32.569 ], 00:10:32.569 "strip_size_kb": 64, 00:10:32.569 "superblock": false, 00:10:32.569 "method": "bdev_raid_create", 00:10:32.569 "req_id": 1 00:10:32.569 } 00:10:32.569 Got JSON-RPC error response 00:10:32.569 response: 00:10:32.569 { 00:10:32.569 "code": -17, 00:10:32.569 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:32.569 } 00:10:32.569 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:32.569 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:32.569 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:32.569 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:32.569 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:32.569 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.569 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:32.569 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.569 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.569 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.828 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:32.828 01:30:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:32.828 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:32.828 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.828 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.828 [2024-10-09 01:30:31.477613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:32.828 [2024-10-09 01:30:31.477701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.828 [2024-10-09 01:30:31.477735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:32.828 [2024-10-09 01:30:31.477765] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.828 [2024-10-09 01:30:31.480188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.828 [2024-10-09 01:30:31.480269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:32.828 [2024-10-09 01:30:31.480363] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:32.828 [2024-10-09 01:30:31.480457] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:32.828 pt1 00:10:32.828 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.828 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:32.828 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:32.829 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.829 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:10:32.829 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.829 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.829 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.829 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.829 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.829 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.829 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.829 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.829 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.829 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.829 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.829 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.829 "name": "raid_bdev1", 00:10:32.829 "uuid": "8890a8dc-f34e-4820-a5cb-6199cc47f2b6", 00:10:32.829 "strip_size_kb": 64, 00:10:32.829 "state": "configuring", 00:10:32.829 "raid_level": "concat", 00:10:32.829 "superblock": true, 00:10:32.829 "num_base_bdevs": 4, 00:10:32.829 "num_base_bdevs_discovered": 1, 00:10:32.829 "num_base_bdevs_operational": 4, 00:10:32.829 "base_bdevs_list": [ 00:10:32.829 { 00:10:32.829 "name": "pt1", 00:10:32.829 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:32.829 "is_configured": true, 00:10:32.829 "data_offset": 2048, 00:10:32.829 "data_size": 63488 00:10:32.829 }, 00:10:32.829 { 00:10:32.829 "name": null, 00:10:32.829 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:10:32.829 "is_configured": false, 00:10:32.829 "data_offset": 2048, 00:10:32.829 "data_size": 63488 00:10:32.829 }, 00:10:32.829 { 00:10:32.829 "name": null, 00:10:32.829 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:32.829 "is_configured": false, 00:10:32.829 "data_offset": 2048, 00:10:32.829 "data_size": 63488 00:10:32.829 }, 00:10:32.829 { 00:10:32.829 "name": null, 00:10:32.829 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:32.829 "is_configured": false, 00:10:32.829 "data_offset": 2048, 00:10:32.829 "data_size": 63488 00:10:32.829 } 00:10:32.829 ] 00:10:32.829 }' 00:10:32.829 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.829 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.088 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:33.088 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:33.088 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.088 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.088 [2024-10-09 01:30:31.889711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:33.088 [2024-10-09 01:30:31.889804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.088 [2024-10-09 01:30:31.889839] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:33.088 [2024-10-09 01:30:31.889870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.088 [2024-10-09 01:30:31.890283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.088 [2024-10-09 01:30:31.890346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt2 00:10:33.088 [2024-10-09 01:30:31.890438] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:33.088 [2024-10-09 01:30:31.890490] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:33.088 pt2 00:10:33.088 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.088 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:33.088 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.088 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.088 [2024-10-09 01:30:31.901726] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:33.088 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.088 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:33.088 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:33.088 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.088 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.088 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.088 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.088 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.088 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.088 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.088 01:30:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.088 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.088 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.088 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.088 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.088 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.088 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.088 "name": "raid_bdev1", 00:10:33.088 "uuid": "8890a8dc-f34e-4820-a5cb-6199cc47f2b6", 00:10:33.088 "strip_size_kb": 64, 00:10:33.088 "state": "configuring", 00:10:33.088 "raid_level": "concat", 00:10:33.088 "superblock": true, 00:10:33.089 "num_base_bdevs": 4, 00:10:33.089 "num_base_bdevs_discovered": 1, 00:10:33.089 "num_base_bdevs_operational": 4, 00:10:33.089 "base_bdevs_list": [ 00:10:33.089 { 00:10:33.089 "name": "pt1", 00:10:33.089 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:33.089 "is_configured": true, 00:10:33.089 "data_offset": 2048, 00:10:33.089 "data_size": 63488 00:10:33.089 }, 00:10:33.089 { 00:10:33.089 "name": null, 00:10:33.089 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:33.089 "is_configured": false, 00:10:33.089 "data_offset": 0, 00:10:33.089 "data_size": 63488 00:10:33.089 }, 00:10:33.089 { 00:10:33.089 "name": null, 00:10:33.089 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:33.089 "is_configured": false, 00:10:33.089 "data_offset": 2048, 00:10:33.089 "data_size": 63488 00:10:33.089 }, 00:10:33.089 { 00:10:33.089 "name": null, 00:10:33.089 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:33.089 "is_configured": false, 00:10:33.089 "data_offset": 2048, 00:10:33.089 "data_size": 63488 00:10:33.089 } 00:10:33.089 ] 00:10:33.089 }' 
00:10:33.089 01:30:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.089 01:30:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.658 [2024-10-09 01:30:32.349907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:33.658 [2024-10-09 01:30:32.350003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.658 [2024-10-09 01:30:32.350044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:33.658 [2024-10-09 01:30:32.350073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.658 [2024-10-09 01:30:32.350517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.658 [2024-10-09 01:30:32.350580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:33.658 [2024-10-09 01:30:32.350684] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:33.658 [2024-10-09 01:30:32.350745] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:33.658 pt2 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:33.658 01:30:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.658 [2024-10-09 01:30:32.361863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:33.658 [2024-10-09 01:30:32.361950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.658 [2024-10-09 01:30:32.361987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:33.658 [2024-10-09 01:30:32.362013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.658 [2024-10-09 01:30:32.362363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.658 [2024-10-09 01:30:32.362424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:33.658 [2024-10-09 01:30:32.362505] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:33.658 [2024-10-09 01:30:32.362584] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:33.658 pt3 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.658 [2024-10-09 01:30:32.373848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:33.658 [2024-10-09 01:30:32.373926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.658 [2024-10-09 01:30:32.373961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:33.658 [2024-10-09 01:30:32.373987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.658 [2024-10-09 01:30:32.374324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.658 [2024-10-09 01:30:32.374383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:33.658 [2024-10-09 01:30:32.374462] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:33.658 [2024-10-09 01:30:32.374506] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:33.658 [2024-10-09 01:30:32.374645] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:33.658 [2024-10-09 01:30:32.374683] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:33.658 [2024-10-09 01:30:32.374951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:33.658 [2024-10-09 01:30:32.375106] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:33.658 [2024-10-09 01:30:32.375151] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:33.658 [2024-10-09 01:30:32.375278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.658 pt4 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.658 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.659 "name": 
"raid_bdev1", 00:10:33.659 "uuid": "8890a8dc-f34e-4820-a5cb-6199cc47f2b6", 00:10:33.659 "strip_size_kb": 64, 00:10:33.659 "state": "online", 00:10:33.659 "raid_level": "concat", 00:10:33.659 "superblock": true, 00:10:33.659 "num_base_bdevs": 4, 00:10:33.659 "num_base_bdevs_discovered": 4, 00:10:33.659 "num_base_bdevs_operational": 4, 00:10:33.659 "base_bdevs_list": [ 00:10:33.659 { 00:10:33.659 "name": "pt1", 00:10:33.659 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:33.659 "is_configured": true, 00:10:33.659 "data_offset": 2048, 00:10:33.659 "data_size": 63488 00:10:33.659 }, 00:10:33.659 { 00:10:33.659 "name": "pt2", 00:10:33.659 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:33.659 "is_configured": true, 00:10:33.659 "data_offset": 2048, 00:10:33.659 "data_size": 63488 00:10:33.659 }, 00:10:33.659 { 00:10:33.659 "name": "pt3", 00:10:33.659 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:33.659 "is_configured": true, 00:10:33.659 "data_offset": 2048, 00:10:33.659 "data_size": 63488 00:10:33.659 }, 00:10:33.659 { 00:10:33.659 "name": "pt4", 00:10:33.659 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:33.659 "is_configured": true, 00:10:33.659 "data_offset": 2048, 00:10:33.659 "data_size": 63488 00:10:33.659 } 00:10:33.659 ] 00:10:33.659 }' 00:10:33.659 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.659 01:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.918 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:33.918 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:33.918 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:33.918 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:33.918 01:30:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@184 -- # local name 00:10:33.918 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:33.918 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:33.918 01:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.918 01:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.918 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:33.918 [2024-10-09 01:30:32.794331] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:33.918 01:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.178 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:34.178 "name": "raid_bdev1", 00:10:34.178 "aliases": [ 00:10:34.178 "8890a8dc-f34e-4820-a5cb-6199cc47f2b6" 00:10:34.178 ], 00:10:34.178 "product_name": "Raid Volume", 00:10:34.178 "block_size": 512, 00:10:34.178 "num_blocks": 253952, 00:10:34.178 "uuid": "8890a8dc-f34e-4820-a5cb-6199cc47f2b6", 00:10:34.178 "assigned_rate_limits": { 00:10:34.178 "rw_ios_per_sec": 0, 00:10:34.178 "rw_mbytes_per_sec": 0, 00:10:34.178 "r_mbytes_per_sec": 0, 00:10:34.178 "w_mbytes_per_sec": 0 00:10:34.178 }, 00:10:34.178 "claimed": false, 00:10:34.178 "zoned": false, 00:10:34.178 "supported_io_types": { 00:10:34.178 "read": true, 00:10:34.178 "write": true, 00:10:34.178 "unmap": true, 00:10:34.178 "flush": true, 00:10:34.178 "reset": true, 00:10:34.178 "nvme_admin": false, 00:10:34.178 "nvme_io": false, 00:10:34.178 "nvme_io_md": false, 00:10:34.178 "write_zeroes": true, 00:10:34.178 "zcopy": false, 00:10:34.178 "get_zone_info": false, 00:10:34.178 "zone_management": false, 00:10:34.178 "zone_append": false, 00:10:34.178 "compare": false, 00:10:34.178 "compare_and_write": false, 00:10:34.178 "abort": 
false, 00:10:34.178 "seek_hole": false, 00:10:34.178 "seek_data": false, 00:10:34.178 "copy": false, 00:10:34.178 "nvme_iov_md": false 00:10:34.178 }, 00:10:34.178 "memory_domains": [ 00:10:34.178 { 00:10:34.178 "dma_device_id": "system", 00:10:34.178 "dma_device_type": 1 00:10:34.178 }, 00:10:34.178 { 00:10:34.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.178 "dma_device_type": 2 00:10:34.178 }, 00:10:34.178 { 00:10:34.178 "dma_device_id": "system", 00:10:34.178 "dma_device_type": 1 00:10:34.178 }, 00:10:34.178 { 00:10:34.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.178 "dma_device_type": 2 00:10:34.178 }, 00:10:34.178 { 00:10:34.178 "dma_device_id": "system", 00:10:34.178 "dma_device_type": 1 00:10:34.178 }, 00:10:34.178 { 00:10:34.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.178 "dma_device_type": 2 00:10:34.178 }, 00:10:34.178 { 00:10:34.178 "dma_device_id": "system", 00:10:34.178 "dma_device_type": 1 00:10:34.178 }, 00:10:34.178 { 00:10:34.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.178 "dma_device_type": 2 00:10:34.178 } 00:10:34.178 ], 00:10:34.178 "driver_specific": { 00:10:34.178 "raid": { 00:10:34.178 "uuid": "8890a8dc-f34e-4820-a5cb-6199cc47f2b6", 00:10:34.178 "strip_size_kb": 64, 00:10:34.178 "state": "online", 00:10:34.178 "raid_level": "concat", 00:10:34.178 "superblock": true, 00:10:34.178 "num_base_bdevs": 4, 00:10:34.178 "num_base_bdevs_discovered": 4, 00:10:34.178 "num_base_bdevs_operational": 4, 00:10:34.178 "base_bdevs_list": [ 00:10:34.178 { 00:10:34.178 "name": "pt1", 00:10:34.178 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:34.178 "is_configured": true, 00:10:34.178 "data_offset": 2048, 00:10:34.178 "data_size": 63488 00:10:34.178 }, 00:10:34.178 { 00:10:34.178 "name": "pt2", 00:10:34.178 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:34.178 "is_configured": true, 00:10:34.178 "data_offset": 2048, 00:10:34.178 "data_size": 63488 00:10:34.178 }, 00:10:34.178 { 00:10:34.178 "name": "pt3", 
00:10:34.178 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:34.178 "is_configured": true, 00:10:34.178 "data_offset": 2048, 00:10:34.178 "data_size": 63488 00:10:34.178 }, 00:10:34.178 { 00:10:34.178 "name": "pt4", 00:10:34.178 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:34.178 "is_configured": true, 00:10:34.178 "data_offset": 2048, 00:10:34.178 "data_size": 63488 00:10:34.178 } 00:10:34.178 ] 00:10:34.178 } 00:10:34.178 } 00:10:34.178 }' 00:10:34.178 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:34.178 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:34.178 pt2 00:10:34.178 pt3 00:10:34.178 pt4' 00:10:34.178 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.178 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:34.178 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.179 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:34.179 01:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.179 01:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.179 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.179 01:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.179 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.179 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.179 01:30:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.179 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:34.179 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.179 01:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.179 01:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.179 01:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.179 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.179 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.179 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.179 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:34.179 01:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.179 01:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.179 01:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.179 01:30:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.179 01:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.179 01:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.179 01:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.179 01:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt4 00:10:34.179 01:30:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.179 01:30:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.179 01:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.179 01:30:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.439 01:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.439 01:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.439 01:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:34.439 01:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:34.439 01:30:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.439 01:30:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.439 [2024-10-09 01:30:33.106365] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.439 01:30:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.439 01:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8890a8dc-f34e-4820-a5cb-6199cc47f2b6 '!=' 8890a8dc-f34e-4820-a5cb-6199cc47f2b6 ']' 00:10:34.439 01:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:34.439 01:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:34.439 01:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:34.439 01:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84598 00:10:34.439 01:30:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 
-- # '[' -z 84598 ']' 00:10:34.439 01:30:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 84598 00:10:34.439 01:30:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:34.439 01:30:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:34.439 01:30:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84598 00:10:34.439 01:30:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:34.439 01:30:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:34.439 01:30:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84598' 00:10:34.439 killing process with pid 84598 00:10:34.439 01:30:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 84598 00:10:34.439 [2024-10-09 01:30:33.195790] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:34.439 [2024-10-09 01:30:33.195930] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:34.439 01:30:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 84598 00:10:34.439 [2024-10-09 01:30:33.196043] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:34.439 [2024-10-09 01:30:33.196054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:34.439 [2024-10-09 01:30:33.271423] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:35.010 01:30:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:35.010 00:10:35.010 real 0m4.267s 00:10:35.010 user 0m6.475s 00:10:35.010 sys 0m1.015s 00:10:35.010 ************************************ 00:10:35.010 END TEST raid_superblock_test 00:10:35.010 
************************************ 00:10:35.010 01:30:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.010 01:30:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.010 01:30:33 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:10:35.010 01:30:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:35.010 01:30:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.010 01:30:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:35.010 ************************************ 00:10:35.010 START TEST raid_read_error_test 00:10:35.010 ************************************ 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= 
num_base_bdevs )) 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.yDVf8fwacD 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # 
raid_pid=84847 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 84847 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 84847 ']' 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:35.010 01:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.010 [2024-10-09 01:30:33.811623] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:10:35.010 [2024-10-09 01:30:33.811774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84847 ] 00:10:35.271 [2024-10-09 01:30:33.948278] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:35.271 [2024-10-09 01:30:33.978828] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.271 [2024-10-09 01:30:34.047544] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.271 [2024-10-09 01:30:34.123096] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.271 [2024-10-09 01:30:34.123159] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.840 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:35.840 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:35.840 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:35.840 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:35.840 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.840 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.840 BaseBdev1_malloc 00:10:35.840 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.840 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:35.840 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.840 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.840 true 00:10:35.840 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.840 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:35.840 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.840 01:30:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:35.840 [2024-10-09 01:30:34.669708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:35.840 [2024-10-09 01:30:34.669810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.840 [2024-10-09 01:30:34.669851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:35.840 [2024-10-09 01:30:34.669895] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.840 [2024-10-09 01:30:34.672221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.840 [2024-10-09 01:30:34.672294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:35.840 BaseBdev1 00:10:35.840 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.840 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:35.840 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:35.840 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.840 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.840 BaseBdev2_malloc 00:10:35.840 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.840 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:35.840 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.840 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.840 true 00:10:35.840 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.840 01:30:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:35.840 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.840 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.840 [2024-10-09 01:30:34.718984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:35.840 [2024-10-09 01:30:34.719083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:35.840 [2024-10-09 01:30:34.719120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:35.840 [2024-10-09 01:30:34.719153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:35.840 [2024-10-09 01:30:34.721619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:35.840 [2024-10-09 01:30:34.721695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:35.840 BaseBdev2 00:10:35.841 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.841 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:35.841 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:35.841 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.841 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.101 BaseBdev3_malloc 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.101 true 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.101 [2024-10-09 01:30:34.765738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:36.101 [2024-10-09 01:30:34.765833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.101 [2024-10-09 01:30:34.765874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:36.101 [2024-10-09 01:30:34.765909] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.101 [2024-10-09 01:30:34.768218] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.101 [2024-10-09 01:30:34.768292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:36.101 BaseBdev3 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.101 BaseBdev4_malloc 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.101 true 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.101 [2024-10-09 01:30:34.812244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:36.101 [2024-10-09 01:30:34.812338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.101 [2024-10-09 01:30:34.812359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:36.101 [2024-10-09 01:30:34.812370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.101 [2024-10-09 01:30:34.814676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.101 [2024-10-09 01:30:34.814753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:36.101 BaseBdev4 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.101 [2024-10-09 01:30:34.824325] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.101 [2024-10-09 01:30:34.826442] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:36.101 [2024-10-09 01:30:34.826565] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.101 [2024-10-09 01:30:34.826650] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:36.101 [2024-10-09 01:30:34.826880] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:36.101 [2024-10-09 01:30:34.826928] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:36.101 [2024-10-09 01:30:34.827183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:36.101 [2024-10-09 01:30:34.827358] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:36.101 [2024-10-09 01:30:34.827398] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:36.101 [2024-10-09 01:30:34.827574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.101 01:30:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.101 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.101 "name": "raid_bdev1", 00:10:36.101 "uuid": "68a5a12d-7198-4c51-acda-ee08de4f8892", 00:10:36.101 "strip_size_kb": 64, 00:10:36.101 "state": "online", 00:10:36.101 "raid_level": "concat", 00:10:36.101 "superblock": true, 00:10:36.101 "num_base_bdevs": 4, 00:10:36.101 "num_base_bdevs_discovered": 4, 00:10:36.101 "num_base_bdevs_operational": 4, 00:10:36.101 "base_bdevs_list": [ 00:10:36.101 { 00:10:36.101 "name": "BaseBdev1", 00:10:36.101 "uuid": "d9577646-c750-58d5-a716-06826a0fe9cc", 00:10:36.101 "is_configured": true, 00:10:36.101 "data_offset": 2048, 00:10:36.101 "data_size": 63488 00:10:36.101 }, 00:10:36.101 { 00:10:36.101 "name": "BaseBdev2", 00:10:36.101 "uuid": "ec6bc776-2e55-5516-be5b-06a36526a7b5", 
00:10:36.101 "is_configured": true, 00:10:36.102 "data_offset": 2048, 00:10:36.102 "data_size": 63488 00:10:36.102 }, 00:10:36.102 { 00:10:36.102 "name": "BaseBdev3", 00:10:36.102 "uuid": "f2951927-24ab-5b52-a6fd-0de76b729c0f", 00:10:36.102 "is_configured": true, 00:10:36.102 "data_offset": 2048, 00:10:36.102 "data_size": 63488 00:10:36.102 }, 00:10:36.102 { 00:10:36.102 "name": "BaseBdev4", 00:10:36.102 "uuid": "287d21ee-7207-504f-bb10-ed8db3452f39", 00:10:36.102 "is_configured": true, 00:10:36.102 "data_offset": 2048, 00:10:36.102 "data_size": 63488 00:10:36.102 } 00:10:36.102 ] 00:10:36.102 }' 00:10:36.102 01:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.102 01:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.387 01:30:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:36.387 01:30:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:36.647 [2024-10-09 01:30:35.344889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:10:37.586 01:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:37.586 01:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.586 01:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.586 01:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.586 01:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:37.587 01:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:37.587 01:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:37.587 01:30:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:37.587 01:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.587 01:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.587 01:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.587 01:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.587 01:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.587 01:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.587 01:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.587 01:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.587 01:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.587 01:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.587 01:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.587 01:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.587 01:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.587 01:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.587 01:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.587 "name": "raid_bdev1", 00:10:37.587 "uuid": "68a5a12d-7198-4c51-acda-ee08de4f8892", 00:10:37.587 "strip_size_kb": 64, 00:10:37.587 "state": "online", 00:10:37.587 "raid_level": "concat", 00:10:37.587 "superblock": true, 00:10:37.587 "num_base_bdevs": 4, 
00:10:37.587 "num_base_bdevs_discovered": 4, 00:10:37.587 "num_base_bdevs_operational": 4, 00:10:37.587 "base_bdevs_list": [ 00:10:37.587 { 00:10:37.587 "name": "BaseBdev1", 00:10:37.587 "uuid": "d9577646-c750-58d5-a716-06826a0fe9cc", 00:10:37.587 "is_configured": true, 00:10:37.587 "data_offset": 2048, 00:10:37.587 "data_size": 63488 00:10:37.587 }, 00:10:37.587 { 00:10:37.587 "name": "BaseBdev2", 00:10:37.587 "uuid": "ec6bc776-2e55-5516-be5b-06a36526a7b5", 00:10:37.587 "is_configured": true, 00:10:37.587 "data_offset": 2048, 00:10:37.587 "data_size": 63488 00:10:37.587 }, 00:10:37.587 { 00:10:37.587 "name": "BaseBdev3", 00:10:37.587 "uuid": "f2951927-24ab-5b52-a6fd-0de76b729c0f", 00:10:37.587 "is_configured": true, 00:10:37.587 "data_offset": 2048, 00:10:37.587 "data_size": 63488 00:10:37.587 }, 00:10:37.587 { 00:10:37.587 "name": "BaseBdev4", 00:10:37.587 "uuid": "287d21ee-7207-504f-bb10-ed8db3452f39", 00:10:37.587 "is_configured": true, 00:10:37.587 "data_offset": 2048, 00:10:37.587 "data_size": 63488 00:10:37.587 } 00:10:37.587 ] 00:10:37.587 }' 00:10:37.587 01:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.587 01:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.847 01:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:37.847 01:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.847 01:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.847 [2024-10-09 01:30:36.712479] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:37.847 [2024-10-09 01:30:36.712597] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:37.847 [2024-10-09 01:30:36.715055] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:37.847 [2024-10-09 01:30:36.715166] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.847 [2024-10-09 01:30:36.715238] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:37.847 [2024-10-09 01:30:36.715296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:37.847 { 00:10:37.847 "results": [ 00:10:37.847 { 00:10:37.847 "job": "raid_bdev1", 00:10:37.847 "core_mask": "0x1", 00:10:37.847 "workload": "randrw", 00:10:37.847 "percentage": 50, 00:10:37.847 "status": "finished", 00:10:37.847 "queue_depth": 1, 00:10:37.847 "io_size": 131072, 00:10:37.847 "runtime": 1.36553, 00:10:37.847 "iops": 14929.001925992106, 00:10:37.847 "mibps": 1866.1252407490133, 00:10:37.847 "io_failed": 1, 00:10:37.847 "io_timeout": 0, 00:10:37.847 "avg_latency_us": 94.21995585627299, 00:10:37.847 "min_latency_us": 23.986751503530026, 00:10:37.847 "max_latency_us": 1335.2253116011505 00:10:37.847 } 00:10:37.847 ], 00:10:37.847 "core_count": 1 00:10:37.847 } 00:10:37.847 01:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.847 01:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 84847 00:10:37.847 01:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 84847 ']' 00:10:37.847 01:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 84847 00:10:37.847 01:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:37.847 01:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:37.847 01:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84847 00:10:38.107 01:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:38.107 01:30:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:38.107 01:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84847' 00:10:38.107 killing process with pid 84847 00:10:38.107 01:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 84847 00:10:38.107 [2024-10-09 01:30:36.749565] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:38.107 01:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 84847 00:10:38.107 [2024-10-09 01:30:36.813457] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:38.368 01:30:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.yDVf8fwacD 00:10:38.368 01:30:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:38.368 01:30:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:38.368 01:30:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:38.368 01:30:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:38.368 01:30:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:38.368 01:30:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:38.368 01:30:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:38.368 00:10:38.368 real 0m3.489s 00:10:38.368 user 0m4.226s 00:10:38.368 sys 0m0.651s 00:10:38.368 01:30:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:38.368 01:30:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.368 ************************************ 00:10:38.368 END TEST raid_read_error_test 00:10:38.368 ************************************ 00:10:38.368 01:30:37 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test 
raid_io_error_test concat 4 write 00:10:38.368 01:30:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:38.368 01:30:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:38.368 01:30:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:38.629 ************************************ 00:10:38.629 START TEST raid_write_error_test 00:10:38.629 ************************************ 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.o10JQn4znm 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=84983 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # 
waitforlisten 84983 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 84983 ']' 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:38.629 01:30:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.629 [2024-10-09 01:30:37.376309] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:10:38.629 [2024-10-09 01:30:37.376530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84983 ] 00:10:38.629 [2024-10-09 01:30:37.512376] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:38.889 [2024-10-09 01:30:37.533482] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.889 [2024-10-09 01:30:37.602851] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.889 [2024-10-09 01:30:37.678720] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.889 [2024-10-09 01:30:37.678772] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.459 BaseBdev1_malloc 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.459 true 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.459 01:30:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.459 [2024-10-09 01:30:38.238138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:39.459 [2024-10-09 01:30:38.238240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.459 [2024-10-09 01:30:38.238263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:39.459 [2024-10-09 01:30:38.238278] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.459 [2024-10-09 01:30:38.240722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.459 [2024-10-09 01:30:38.240762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:39.459 BaseBdev1 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.459 BaseBdev2_malloc 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.459 true 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.459 [2024-10-09 01:30:38.303324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:39.459 [2024-10-09 01:30:38.303405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.459 [2024-10-09 01:30:38.303433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:39.459 [2024-10-09 01:30:38.303451] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.459 [2024-10-09 01:30:38.306913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.459 [2024-10-09 01:30:38.306962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:39.459 BaseBdev2 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.459 BaseBdev3_malloc 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.459 true 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.459 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.719 [2024-10-09 01:30:38.351067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:39.719 [2024-10-09 01:30:38.351120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.719 [2024-10-09 01:30:38.351137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:39.719 [2024-10-09 01:30:38.351149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.719 [2024-10-09 01:30:38.353526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.719 [2024-10-09 01:30:38.353658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:39.719 BaseBdev3 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.719 BaseBdev4_malloc 00:10:39.719 
01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.719 true 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.719 [2024-10-09 01:30:38.397692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:39.719 [2024-10-09 01:30:38.397749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.719 [2024-10-09 01:30:38.397766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:39.719 [2024-10-09 01:30:38.397777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.719 [2024-10-09 01:30:38.400113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.719 [2024-10-09 01:30:38.400170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:39.719 BaseBdev4 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:39.719 01:30:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.719 [2024-10-09 01:30:38.409784] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:39.719 [2024-10-09 01:30:38.411894] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.719 [2024-10-09 01:30:38.411974] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:39.719 [2024-10-09 01:30:38.412033] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:39.719 [2024-10-09 01:30:38.412233] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:39.719 [2024-10-09 01:30:38.412246] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:39.719 [2024-10-09 01:30:38.412528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:39.719 [2024-10-09 01:30:38.412660] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:39.719 [2024-10-09 01:30:38.412669] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:39.719 [2024-10-09 01:30:38.412818] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.719 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.719 "name": "raid_bdev1", 00:10:39.719 "uuid": "1eef0881-8ad8-44be-a147-961502232492", 00:10:39.719 "strip_size_kb": 64, 00:10:39.719 "state": "online", 00:10:39.719 "raid_level": "concat", 00:10:39.719 "superblock": true, 00:10:39.719 "num_base_bdevs": 4, 00:10:39.719 "num_base_bdevs_discovered": 4, 00:10:39.720 "num_base_bdevs_operational": 4, 00:10:39.720 "base_bdevs_list": [ 00:10:39.720 { 00:10:39.720 "name": "BaseBdev1", 00:10:39.720 "uuid": "34715785-df37-5255-ad78-faa6b5a07348", 00:10:39.720 "is_configured": true, 00:10:39.720 "data_offset": 2048, 00:10:39.720 "data_size": 63488 00:10:39.720 }, 00:10:39.720 { 00:10:39.720 
"name": "BaseBdev2", 00:10:39.720 "uuid": "aa0efd2c-e0b3-5b98-b50f-cd2427b422b6", 00:10:39.720 "is_configured": true, 00:10:39.720 "data_offset": 2048, 00:10:39.720 "data_size": 63488 00:10:39.720 }, 00:10:39.720 { 00:10:39.720 "name": "BaseBdev3", 00:10:39.720 "uuid": "e2a7129b-6e38-5fb4-ac15-4c8eff1ee2be", 00:10:39.720 "is_configured": true, 00:10:39.720 "data_offset": 2048, 00:10:39.720 "data_size": 63488 00:10:39.720 }, 00:10:39.720 { 00:10:39.720 "name": "BaseBdev4", 00:10:39.720 "uuid": "3f2b15ab-5b48-5a94-981d-ebaeca4ae41a", 00:10:39.720 "is_configured": true, 00:10:39.720 "data_offset": 2048, 00:10:39.720 "data_size": 63488 00:10:39.720 } 00:10:39.720 ] 00:10:39.720 }' 00:10:39.720 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.720 01:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.979 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:39.979 01:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:40.239 [2024-10-09 01:30:38.930372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:10:41.178 01:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:41.178 01:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.178 01:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.178 01:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.178 01:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:41.178 01:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:41.178 01:30:39 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:41.178 01:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:41.178 01:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:41.178 01:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:41.178 01:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.178 01:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.178 01:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.178 01:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.178 01:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.178 01:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.178 01:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.179 01:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.179 01:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:41.179 01:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.179 01:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.179 01:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.179 01:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.179 "name": "raid_bdev1", 00:10:41.179 "uuid": "1eef0881-8ad8-44be-a147-961502232492", 00:10:41.179 "strip_size_kb": 64, 00:10:41.179 "state": "online", 
00:10:41.179 "raid_level": "concat", 00:10:41.179 "superblock": true, 00:10:41.179 "num_base_bdevs": 4, 00:10:41.179 "num_base_bdevs_discovered": 4, 00:10:41.179 "num_base_bdevs_operational": 4, 00:10:41.179 "base_bdevs_list": [ 00:10:41.179 { 00:10:41.179 "name": "BaseBdev1", 00:10:41.179 "uuid": "34715785-df37-5255-ad78-faa6b5a07348", 00:10:41.179 "is_configured": true, 00:10:41.179 "data_offset": 2048, 00:10:41.179 "data_size": 63488 00:10:41.179 }, 00:10:41.179 { 00:10:41.179 "name": "BaseBdev2", 00:10:41.179 "uuid": "aa0efd2c-e0b3-5b98-b50f-cd2427b422b6", 00:10:41.179 "is_configured": true, 00:10:41.179 "data_offset": 2048, 00:10:41.179 "data_size": 63488 00:10:41.179 }, 00:10:41.179 { 00:10:41.179 "name": "BaseBdev3", 00:10:41.179 "uuid": "e2a7129b-6e38-5fb4-ac15-4c8eff1ee2be", 00:10:41.179 "is_configured": true, 00:10:41.179 "data_offset": 2048, 00:10:41.179 "data_size": 63488 00:10:41.179 }, 00:10:41.179 { 00:10:41.179 "name": "BaseBdev4", 00:10:41.179 "uuid": "3f2b15ab-5b48-5a94-981d-ebaeca4ae41a", 00:10:41.179 "is_configured": true, 00:10:41.179 "data_offset": 2048, 00:10:41.179 "data_size": 63488 00:10:41.179 } 00:10:41.179 ] 00:10:41.179 }' 00:10:41.179 01:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.179 01:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.438 01:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:41.438 01:30:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.438 01:30:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.438 [2024-10-09 01:30:40.301831] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:41.438 [2024-10-09 01:30:40.301927] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:41.438 [2024-10-09 01:30:40.304473] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.438 [2024-10-09 01:30:40.304607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:41.438 [2024-10-09 01:30:40.304683] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:41.438 [2024-10-09 01:30:40.304760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:41.438 { 00:10:41.438 "results": [ 00:10:41.438 { 00:10:41.438 "job": "raid_bdev1", 00:10:41.439 "core_mask": "0x1", 00:10:41.439 "workload": "randrw", 00:10:41.439 "percentage": 50, 00:10:41.439 "status": "finished", 00:10:41.439 "queue_depth": 1, 00:10:41.439 "io_size": 131072, 00:10:41.439 "runtime": 1.369317, 00:10:41.439 "iops": 14921.307483950028, 00:10:41.439 "mibps": 1865.1634354937535, 00:10:41.439 "io_failed": 1, 00:10:41.439 "io_timeout": 0, 00:10:41.439 "avg_latency_us": 94.26019999188748, 00:10:41.439 "min_latency_us": 24.20988407565589, 00:10:41.439 "max_latency_us": 1378.0667654493159 00:10:41.439 } 00:10:41.439 ], 00:10:41.439 "core_count": 1 00:10:41.439 } 00:10:41.439 01:30:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.439 01:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 84983 00:10:41.439 01:30:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 84983 ']' 00:10:41.439 01:30:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 84983 00:10:41.439 01:30:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:41.439 01:30:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:41.439 01:30:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84983 00:10:41.698 01:30:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:41.698 01:30:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:41.698 killing process with pid 84983 00:10:41.698 01:30:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84983' 00:10:41.698 01:30:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 84983 00:10:41.698 [2024-10-09 01:30:40.353393] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:41.698 01:30:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 84983 00:10:41.698 [2024-10-09 01:30:40.415327] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:41.958 01:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:41.958 01:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.o10JQn4znm 00:10:41.958 01:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:41.958 01:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:41.958 01:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:41.958 01:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:41.958 01:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:41.958 ************************************ 00:10:41.958 END TEST raid_write_error_test 00:10:41.958 ************************************ 00:10:41.958 01:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:41.958 00:10:41.958 real 0m3.525s 00:10:41.958 user 0m4.262s 00:10:41.958 sys 0m0.654s 00:10:41.958 01:30:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:41.958 01:30:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:42.218 01:30:40 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:42.218 01:30:40 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:10:42.218 01:30:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:42.218 01:30:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:42.218 01:30:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:42.218 ************************************ 00:10:42.218 START TEST raid_state_function_test 00:10:42.218 ************************************ 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:42.218 01:30:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:42.218 Process raid pid: 85110 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=85110 
00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85110' 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 85110 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 85110 ']' 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:42.218 01:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.218 [2024-10-09 01:30:40.964183] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:10:42.218 [2024-10-09 01:30:40.964317] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.218 [2024-10-09 01:30:41.101378] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:42.477 [2024-10-09 01:30:41.125244] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.477 [2024-10-09 01:30:41.194818] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.477 [2024-10-09 01:30:41.270349] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.477 [2024-10-09 01:30:41.270479] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.046 01:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:43.046 01:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:43.046 01:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:43.046 01:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.046 01:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.046 [2024-10-09 01:30:41.794798] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:43.046 [2024-10-09 01:30:41.794854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:43.046 [2024-10-09 01:30:41.794867] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:43.046 [2024-10-09 01:30:41.794875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:43.046 [2024-10-09 01:30:41.794889] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:43.046 [2024-10-09 01:30:41.794896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:43.046 [2024-10-09 01:30:41.794903] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:43.046 [2024-10-09 
01:30:41.794910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:43.046 01:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.046 01:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:43.046 01:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.046 01:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.046 01:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.046 01:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.047 01:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.047 01:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.047 01:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.047 01:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.047 01:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.047 01:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.047 01:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.047 01:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.047 01:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.047 01:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.047 01:30:41 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.047 "name": "Existed_Raid", 00:10:43.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.047 "strip_size_kb": 0, 00:10:43.047 "state": "configuring", 00:10:43.047 "raid_level": "raid1", 00:10:43.047 "superblock": false, 00:10:43.047 "num_base_bdevs": 4, 00:10:43.047 "num_base_bdevs_discovered": 0, 00:10:43.047 "num_base_bdevs_operational": 4, 00:10:43.047 "base_bdevs_list": [ 00:10:43.047 { 00:10:43.047 "name": "BaseBdev1", 00:10:43.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.047 "is_configured": false, 00:10:43.047 "data_offset": 0, 00:10:43.047 "data_size": 0 00:10:43.047 }, 00:10:43.047 { 00:10:43.047 "name": "BaseBdev2", 00:10:43.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.047 "is_configured": false, 00:10:43.047 "data_offset": 0, 00:10:43.047 "data_size": 0 00:10:43.047 }, 00:10:43.047 { 00:10:43.047 "name": "BaseBdev3", 00:10:43.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.047 "is_configured": false, 00:10:43.047 "data_offset": 0, 00:10:43.047 "data_size": 0 00:10:43.047 }, 00:10:43.047 { 00:10:43.047 "name": "BaseBdev4", 00:10:43.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.047 "is_configured": false, 00:10:43.047 "data_offset": 0, 00:10:43.047 "data_size": 0 00:10:43.047 } 00:10:43.047 ] 00:10:43.047 }' 00:10:43.047 01:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.047 01:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.615 [2024-10-09 01:30:42.294803] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:10:43.615 [2024-10-09 01:30:42.294850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.615 [2024-10-09 01:30:42.306823] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:43.615 [2024-10-09 01:30:42.306863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:43.615 [2024-10-09 01:30:42.306874] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:43.615 [2024-10-09 01:30:42.306881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:43.615 [2024-10-09 01:30:42.306889] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:43.615 [2024-10-09 01:30:42.306896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:43.615 [2024-10-09 01:30:42.306904] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:43.615 [2024-10-09 01:30:42.306911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:43.615 01:30:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.615 [2024-10-09 01:30:42.333980] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.615 BaseBdev1 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.615 [ 00:10:43.615 { 00:10:43.615 "name": "BaseBdev1", 00:10:43.615 "aliases": [ 
00:10:43.615 "829df708-2b02-4b50-9e0a-0f7302847bc8" 00:10:43.615 ], 00:10:43.615 "product_name": "Malloc disk", 00:10:43.615 "block_size": 512, 00:10:43.615 "num_blocks": 65536, 00:10:43.615 "uuid": "829df708-2b02-4b50-9e0a-0f7302847bc8", 00:10:43.615 "assigned_rate_limits": { 00:10:43.615 "rw_ios_per_sec": 0, 00:10:43.615 "rw_mbytes_per_sec": 0, 00:10:43.615 "r_mbytes_per_sec": 0, 00:10:43.615 "w_mbytes_per_sec": 0 00:10:43.615 }, 00:10:43.615 "claimed": true, 00:10:43.615 "claim_type": "exclusive_write", 00:10:43.615 "zoned": false, 00:10:43.615 "supported_io_types": { 00:10:43.615 "read": true, 00:10:43.615 "write": true, 00:10:43.615 "unmap": true, 00:10:43.615 "flush": true, 00:10:43.615 "reset": true, 00:10:43.615 "nvme_admin": false, 00:10:43.615 "nvme_io": false, 00:10:43.615 "nvme_io_md": false, 00:10:43.615 "write_zeroes": true, 00:10:43.615 "zcopy": true, 00:10:43.615 "get_zone_info": false, 00:10:43.615 "zone_management": false, 00:10:43.615 "zone_append": false, 00:10:43.615 "compare": false, 00:10:43.615 "compare_and_write": false, 00:10:43.615 "abort": true, 00:10:43.615 "seek_hole": false, 00:10:43.615 "seek_data": false, 00:10:43.615 "copy": true, 00:10:43.615 "nvme_iov_md": false 00:10:43.615 }, 00:10:43.615 "memory_domains": [ 00:10:43.615 { 00:10:43.615 "dma_device_id": "system", 00:10:43.615 "dma_device_type": 1 00:10:43.615 }, 00:10:43.615 { 00:10:43.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.615 "dma_device_type": 2 00:10:43.615 } 00:10:43.615 ], 00:10:43.615 "driver_specific": {} 00:10:43.615 } 00:10:43.615 ] 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.615 "name": "Existed_Raid", 00:10:43.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.615 "strip_size_kb": 0, 00:10:43.615 "state": "configuring", 00:10:43.615 "raid_level": "raid1", 00:10:43.615 "superblock": false, 00:10:43.615 "num_base_bdevs": 4, 00:10:43.615 "num_base_bdevs_discovered": 1, 00:10:43.615 "num_base_bdevs_operational": 4, 
00:10:43.615 "base_bdevs_list": [ 00:10:43.615 { 00:10:43.615 "name": "BaseBdev1", 00:10:43.615 "uuid": "829df708-2b02-4b50-9e0a-0f7302847bc8", 00:10:43.615 "is_configured": true, 00:10:43.615 "data_offset": 0, 00:10:43.615 "data_size": 65536 00:10:43.615 }, 00:10:43.615 { 00:10:43.615 "name": "BaseBdev2", 00:10:43.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.615 "is_configured": false, 00:10:43.615 "data_offset": 0, 00:10:43.615 "data_size": 0 00:10:43.615 }, 00:10:43.615 { 00:10:43.615 "name": "BaseBdev3", 00:10:43.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.615 "is_configured": false, 00:10:43.615 "data_offset": 0, 00:10:43.615 "data_size": 0 00:10:43.615 }, 00:10:43.615 { 00:10:43.615 "name": "BaseBdev4", 00:10:43.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.615 "is_configured": false, 00:10:43.615 "data_offset": 0, 00:10:43.615 "data_size": 0 00:10:43.615 } 00:10:43.615 ] 00:10:43.615 }' 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.615 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.875 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:43.875 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.875 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.135 [2024-10-09 01:30:42.770119] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:44.135 [2024-10-09 01:30:42.770182] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:44.135 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.135 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 
-b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:44.135 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.135 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.135 [2024-10-09 01:30:42.782141] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:44.135 [2024-10-09 01:30:42.784321] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:44.135 [2024-10-09 01:30:42.784400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:44.135 [2024-10-09 01:30:42.784416] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:44.135 [2024-10-09 01:30:42.784424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:44.135 [2024-10-09 01:30:42.784437] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:44.135 [2024-10-09 01:30:42.784444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:44.135 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.135 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:44.135 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:44.135 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:44.135 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.135 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.135 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:44.135 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.135 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.135 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.135 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.135 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.135 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.135 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.135 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.135 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.135 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.135 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.135 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.135 "name": "Existed_Raid", 00:10:44.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.135 "strip_size_kb": 0, 00:10:44.135 "state": "configuring", 00:10:44.135 "raid_level": "raid1", 00:10:44.135 "superblock": false, 00:10:44.135 "num_base_bdevs": 4, 00:10:44.135 "num_base_bdevs_discovered": 1, 00:10:44.135 "num_base_bdevs_operational": 4, 00:10:44.135 "base_bdevs_list": [ 00:10:44.135 { 00:10:44.135 "name": "BaseBdev1", 00:10:44.135 "uuid": "829df708-2b02-4b50-9e0a-0f7302847bc8", 00:10:44.135 "is_configured": true, 00:10:44.135 "data_offset": 0, 00:10:44.135 "data_size": 65536 00:10:44.135 }, 00:10:44.135 { 
00:10:44.135 "name": "BaseBdev2", 00:10:44.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.135 "is_configured": false, 00:10:44.135 "data_offset": 0, 00:10:44.135 "data_size": 0 00:10:44.135 }, 00:10:44.135 { 00:10:44.135 "name": "BaseBdev3", 00:10:44.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.135 "is_configured": false, 00:10:44.135 "data_offset": 0, 00:10:44.135 "data_size": 0 00:10:44.135 }, 00:10:44.135 { 00:10:44.135 "name": "BaseBdev4", 00:10:44.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.135 "is_configured": false, 00:10:44.135 "data_offset": 0, 00:10:44.135 "data_size": 0 00:10:44.135 } 00:10:44.135 ] 00:10:44.135 }' 00:10:44.135 01:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.135 01:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.395 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:44.395 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.395 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.396 [2024-10-09 01:30:43.242591] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:44.396 BaseBdev2 00:10:44.396 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.396 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:44.396 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:44.396 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:44.396 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:44.396 01:30:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:44.396 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:44.396 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:44.396 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.396 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.396 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.396 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:44.396 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.396 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.396 [ 00:10:44.396 { 00:10:44.396 "name": "BaseBdev2", 00:10:44.396 "aliases": [ 00:10:44.396 "fa0e5370-2b66-42c3-9689-8fff92b69fe0" 00:10:44.396 ], 00:10:44.396 "product_name": "Malloc disk", 00:10:44.396 "block_size": 512, 00:10:44.396 "num_blocks": 65536, 00:10:44.396 "uuid": "fa0e5370-2b66-42c3-9689-8fff92b69fe0", 00:10:44.396 "assigned_rate_limits": { 00:10:44.396 "rw_ios_per_sec": 0, 00:10:44.396 "rw_mbytes_per_sec": 0, 00:10:44.396 "r_mbytes_per_sec": 0, 00:10:44.396 "w_mbytes_per_sec": 0 00:10:44.396 }, 00:10:44.396 "claimed": true, 00:10:44.396 "claim_type": "exclusive_write", 00:10:44.396 "zoned": false, 00:10:44.396 "supported_io_types": { 00:10:44.396 "read": true, 00:10:44.396 "write": true, 00:10:44.396 "unmap": true, 00:10:44.396 "flush": true, 00:10:44.396 "reset": true, 00:10:44.396 "nvme_admin": false, 00:10:44.396 "nvme_io": false, 00:10:44.396 "nvme_io_md": false, 00:10:44.396 "write_zeroes": true, 00:10:44.396 "zcopy": true, 00:10:44.396 "get_zone_info": false, 00:10:44.396 "zone_management": false, 
00:10:44.396 "zone_append": false, 00:10:44.396 "compare": false, 00:10:44.396 "compare_and_write": false, 00:10:44.396 "abort": true, 00:10:44.396 "seek_hole": false, 00:10:44.396 "seek_data": false, 00:10:44.396 "copy": true, 00:10:44.396 "nvme_iov_md": false 00:10:44.396 }, 00:10:44.396 "memory_domains": [ 00:10:44.396 { 00:10:44.396 "dma_device_id": "system", 00:10:44.396 "dma_device_type": 1 00:10:44.396 }, 00:10:44.396 { 00:10:44.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.396 "dma_device_type": 2 00:10:44.396 } 00:10:44.396 ], 00:10:44.396 "driver_specific": {} 00:10:44.396 } 00:10:44.396 ] 00:10:44.396 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.396 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:44.396 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:44.396 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:44.396 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:44.396 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.396 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.396 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.396 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.396 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.655 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.655 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.655 01:30:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.655 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.655 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.655 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.655 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.655 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.655 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.655 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.655 "name": "Existed_Raid", 00:10:44.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.655 "strip_size_kb": 0, 00:10:44.655 "state": "configuring", 00:10:44.655 "raid_level": "raid1", 00:10:44.655 "superblock": false, 00:10:44.655 "num_base_bdevs": 4, 00:10:44.655 "num_base_bdevs_discovered": 2, 00:10:44.655 "num_base_bdevs_operational": 4, 00:10:44.655 "base_bdevs_list": [ 00:10:44.655 { 00:10:44.655 "name": "BaseBdev1", 00:10:44.655 "uuid": "829df708-2b02-4b50-9e0a-0f7302847bc8", 00:10:44.655 "is_configured": true, 00:10:44.655 "data_offset": 0, 00:10:44.655 "data_size": 65536 00:10:44.655 }, 00:10:44.655 { 00:10:44.655 "name": "BaseBdev2", 00:10:44.655 "uuid": "fa0e5370-2b66-42c3-9689-8fff92b69fe0", 00:10:44.655 "is_configured": true, 00:10:44.655 "data_offset": 0, 00:10:44.655 "data_size": 65536 00:10:44.655 }, 00:10:44.655 { 00:10:44.655 "name": "BaseBdev3", 00:10:44.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.655 "is_configured": false, 00:10:44.655 "data_offset": 0, 00:10:44.655 "data_size": 0 00:10:44.655 }, 00:10:44.655 { 00:10:44.655 "name": "BaseBdev4", 
00:10:44.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.655 "is_configured": false, 00:10:44.655 "data_offset": 0, 00:10:44.655 "data_size": 0 00:10:44.655 } 00:10:44.655 ] 00:10:44.655 }' 00:10:44.655 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.655 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.915 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:44.915 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.915 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.915 [2024-10-09 01:30:43.739278] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.915 BaseBdev3 00:10:44.915 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.915 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:44.915 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:44.915 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:44.915 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:44.915 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:44.915 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:44.915 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:44.915 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.915 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:10:44.915 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.915 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:44.915 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.915 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.915 [ 00:10:44.915 { 00:10:44.915 "name": "BaseBdev3", 00:10:44.915 "aliases": [ 00:10:44.915 "776f933a-93b3-4cb0-b17f-f7c6d55fbbc5" 00:10:44.915 ], 00:10:44.915 "product_name": "Malloc disk", 00:10:44.915 "block_size": 512, 00:10:44.915 "num_blocks": 65536, 00:10:44.915 "uuid": "776f933a-93b3-4cb0-b17f-f7c6d55fbbc5", 00:10:44.915 "assigned_rate_limits": { 00:10:44.915 "rw_ios_per_sec": 0, 00:10:44.915 "rw_mbytes_per_sec": 0, 00:10:44.915 "r_mbytes_per_sec": 0, 00:10:44.915 "w_mbytes_per_sec": 0 00:10:44.915 }, 00:10:44.915 "claimed": true, 00:10:44.915 "claim_type": "exclusive_write", 00:10:44.915 "zoned": false, 00:10:44.915 "supported_io_types": { 00:10:44.915 "read": true, 00:10:44.915 "write": true, 00:10:44.915 "unmap": true, 00:10:44.915 "flush": true, 00:10:44.915 "reset": true, 00:10:44.915 "nvme_admin": false, 00:10:44.915 "nvme_io": false, 00:10:44.915 "nvme_io_md": false, 00:10:44.915 "write_zeroes": true, 00:10:44.915 "zcopy": true, 00:10:44.915 "get_zone_info": false, 00:10:44.915 "zone_management": false, 00:10:44.915 "zone_append": false, 00:10:44.915 "compare": false, 00:10:44.915 "compare_and_write": false, 00:10:44.915 "abort": true, 00:10:44.915 "seek_hole": false, 00:10:44.915 "seek_data": false, 00:10:44.915 "copy": true, 00:10:44.915 "nvme_iov_md": false 00:10:44.915 }, 00:10:44.915 "memory_domains": [ 00:10:44.915 { 00:10:44.915 "dma_device_id": "system", 00:10:44.915 "dma_device_type": 1 00:10:44.915 }, 00:10:44.915 { 00:10:44.915 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:44.915 "dma_device_type": 2 00:10:44.915 } 00:10:44.915 ], 00:10:44.915 "driver_specific": {} 00:10:44.915 } 00:10:44.915 ] 00:10:44.915 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.916 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:44.916 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:44.916 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:44.916 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:44.916 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.916 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.916 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.916 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.916 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.916 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.916 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.916 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.916 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.916 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.916 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.916 01:30:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.916 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.916 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.175 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.175 "name": "Existed_Raid", 00:10:45.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.175 "strip_size_kb": 0, 00:10:45.175 "state": "configuring", 00:10:45.175 "raid_level": "raid1", 00:10:45.175 "superblock": false, 00:10:45.175 "num_base_bdevs": 4, 00:10:45.175 "num_base_bdevs_discovered": 3, 00:10:45.175 "num_base_bdevs_operational": 4, 00:10:45.175 "base_bdevs_list": [ 00:10:45.175 { 00:10:45.175 "name": "BaseBdev1", 00:10:45.175 "uuid": "829df708-2b02-4b50-9e0a-0f7302847bc8", 00:10:45.175 "is_configured": true, 00:10:45.175 "data_offset": 0, 00:10:45.175 "data_size": 65536 00:10:45.175 }, 00:10:45.175 { 00:10:45.175 "name": "BaseBdev2", 00:10:45.175 "uuid": "fa0e5370-2b66-42c3-9689-8fff92b69fe0", 00:10:45.175 "is_configured": true, 00:10:45.175 "data_offset": 0, 00:10:45.175 "data_size": 65536 00:10:45.175 }, 00:10:45.175 { 00:10:45.175 "name": "BaseBdev3", 00:10:45.175 "uuid": "776f933a-93b3-4cb0-b17f-f7c6d55fbbc5", 00:10:45.175 "is_configured": true, 00:10:45.175 "data_offset": 0, 00:10:45.175 "data_size": 65536 00:10:45.175 }, 00:10:45.175 { 00:10:45.175 "name": "BaseBdev4", 00:10:45.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.175 "is_configured": false, 00:10:45.175 "data_offset": 0, 00:10:45.175 "data_size": 0 00:10:45.175 } 00:10:45.175 ] 00:10:45.175 }' 00:10:45.175 01:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.175 01:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.441 01:30:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:45.441 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.441 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.441 [2024-10-09 01:30:44.212221] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:45.441 [2024-10-09 01:30:44.212274] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:45.441 [2024-10-09 01:30:44.212301] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:45.441 [2024-10-09 01:30:44.212673] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:45.441 [2024-10-09 01:30:44.212846] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:45.441 [2024-10-09 01:30:44.212864] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:45.441 [2024-10-09 01:30:44.213119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.441 BaseBdev4 00:10:45.441 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.441 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:45.441 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:45.441 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:45.441 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:45.441 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:45.441 01:30:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:45.441 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:45.441 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.441 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.441 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.441 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:45.441 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.441 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.441 [ 00:10:45.441 { 00:10:45.441 "name": "BaseBdev4", 00:10:45.441 "aliases": [ 00:10:45.441 "38a3eb66-4e1c-42d7-8741-b67361588671" 00:10:45.441 ], 00:10:45.441 "product_name": "Malloc disk", 00:10:45.441 "block_size": 512, 00:10:45.441 "num_blocks": 65536, 00:10:45.441 "uuid": "38a3eb66-4e1c-42d7-8741-b67361588671", 00:10:45.441 "assigned_rate_limits": { 00:10:45.441 "rw_ios_per_sec": 0, 00:10:45.441 "rw_mbytes_per_sec": 0, 00:10:45.441 "r_mbytes_per_sec": 0, 00:10:45.441 "w_mbytes_per_sec": 0 00:10:45.441 }, 00:10:45.441 "claimed": true, 00:10:45.441 "claim_type": "exclusive_write", 00:10:45.441 "zoned": false, 00:10:45.441 "supported_io_types": { 00:10:45.441 "read": true, 00:10:45.441 "write": true, 00:10:45.441 "unmap": true, 00:10:45.441 "flush": true, 00:10:45.441 "reset": true, 00:10:45.441 "nvme_admin": false, 00:10:45.441 "nvme_io": false, 00:10:45.441 "nvme_io_md": false, 00:10:45.441 "write_zeroes": true, 00:10:45.441 "zcopy": true, 00:10:45.441 "get_zone_info": false, 00:10:45.441 "zone_management": false, 00:10:45.441 "zone_append": false, 00:10:45.441 "compare": false, 00:10:45.441 "compare_and_write": false, 
00:10:45.441 "abort": true, 00:10:45.441 "seek_hole": false, 00:10:45.442 "seek_data": false, 00:10:45.442 "copy": true, 00:10:45.442 "nvme_iov_md": false 00:10:45.442 }, 00:10:45.442 "memory_domains": [ 00:10:45.442 { 00:10:45.442 "dma_device_id": "system", 00:10:45.442 "dma_device_type": 1 00:10:45.442 }, 00:10:45.442 { 00:10:45.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.442 "dma_device_type": 2 00:10:45.442 } 00:10:45.442 ], 00:10:45.442 "driver_specific": {} 00:10:45.442 } 00:10:45.442 ] 00:10:45.442 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.442 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:45.442 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:45.442 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:45.442 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:45.442 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.442 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.442 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.442 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.442 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.442 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.442 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.442 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.442 
01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.442 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.442 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.442 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.442 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.442 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.442 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.442 "name": "Existed_Raid", 00:10:45.442 "uuid": "a6a1c6d1-9746-4252-b014-b3082d24d4ba", 00:10:45.442 "strip_size_kb": 0, 00:10:45.442 "state": "online", 00:10:45.442 "raid_level": "raid1", 00:10:45.442 "superblock": false, 00:10:45.442 "num_base_bdevs": 4, 00:10:45.442 "num_base_bdevs_discovered": 4, 00:10:45.442 "num_base_bdevs_operational": 4, 00:10:45.442 "base_bdevs_list": [ 00:10:45.442 { 00:10:45.442 "name": "BaseBdev1", 00:10:45.442 "uuid": "829df708-2b02-4b50-9e0a-0f7302847bc8", 00:10:45.442 "is_configured": true, 00:10:45.442 "data_offset": 0, 00:10:45.442 "data_size": 65536 00:10:45.442 }, 00:10:45.442 { 00:10:45.442 "name": "BaseBdev2", 00:10:45.442 "uuid": "fa0e5370-2b66-42c3-9689-8fff92b69fe0", 00:10:45.442 "is_configured": true, 00:10:45.442 "data_offset": 0, 00:10:45.442 "data_size": 65536 00:10:45.442 }, 00:10:45.442 { 00:10:45.442 "name": "BaseBdev3", 00:10:45.442 "uuid": "776f933a-93b3-4cb0-b17f-f7c6d55fbbc5", 00:10:45.442 "is_configured": true, 00:10:45.442 "data_offset": 0, 00:10:45.442 "data_size": 65536 00:10:45.442 }, 00:10:45.442 { 00:10:45.442 "name": "BaseBdev4", 00:10:45.442 "uuid": "38a3eb66-4e1c-42d7-8741-b67361588671", 00:10:45.442 "is_configured": true, 00:10:45.442 
"data_offset": 0, 00:10:45.442 "data_size": 65536 00:10:45.442 } 00:10:45.442 ] 00:10:45.442 }' 00:10:45.442 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.442 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.020 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:46.020 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:46.020 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:46.020 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:46.020 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:46.020 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:46.020 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:46.020 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:46.020 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.020 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.020 [2024-10-09 01:30:44.688728] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.020 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.020 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:46.020 "name": "Existed_Raid", 00:10:46.020 "aliases": [ 00:10:46.020 "a6a1c6d1-9746-4252-b014-b3082d24d4ba" 00:10:46.020 ], 00:10:46.020 "product_name": "Raid Volume", 00:10:46.020 "block_size": 512, 00:10:46.020 "num_blocks": 65536, 
00:10:46.020 "uuid": "a6a1c6d1-9746-4252-b014-b3082d24d4ba", 00:10:46.020 "assigned_rate_limits": { 00:10:46.020 "rw_ios_per_sec": 0, 00:10:46.020 "rw_mbytes_per_sec": 0, 00:10:46.020 "r_mbytes_per_sec": 0, 00:10:46.020 "w_mbytes_per_sec": 0 00:10:46.020 }, 00:10:46.020 "claimed": false, 00:10:46.020 "zoned": false, 00:10:46.020 "supported_io_types": { 00:10:46.020 "read": true, 00:10:46.020 "write": true, 00:10:46.020 "unmap": false, 00:10:46.020 "flush": false, 00:10:46.020 "reset": true, 00:10:46.020 "nvme_admin": false, 00:10:46.020 "nvme_io": false, 00:10:46.020 "nvme_io_md": false, 00:10:46.020 "write_zeroes": true, 00:10:46.020 "zcopy": false, 00:10:46.020 "get_zone_info": false, 00:10:46.020 "zone_management": false, 00:10:46.020 "zone_append": false, 00:10:46.020 "compare": false, 00:10:46.020 "compare_and_write": false, 00:10:46.020 "abort": false, 00:10:46.020 "seek_hole": false, 00:10:46.020 "seek_data": false, 00:10:46.020 "copy": false, 00:10:46.020 "nvme_iov_md": false 00:10:46.020 }, 00:10:46.020 "memory_domains": [ 00:10:46.020 { 00:10:46.020 "dma_device_id": "system", 00:10:46.020 "dma_device_type": 1 00:10:46.020 }, 00:10:46.020 { 00:10:46.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.020 "dma_device_type": 2 00:10:46.020 }, 00:10:46.020 { 00:10:46.020 "dma_device_id": "system", 00:10:46.020 "dma_device_type": 1 00:10:46.020 }, 00:10:46.020 { 00:10:46.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.020 "dma_device_type": 2 00:10:46.020 }, 00:10:46.020 { 00:10:46.020 "dma_device_id": "system", 00:10:46.020 "dma_device_type": 1 00:10:46.020 }, 00:10:46.020 { 00:10:46.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.021 "dma_device_type": 2 00:10:46.021 }, 00:10:46.021 { 00:10:46.021 "dma_device_id": "system", 00:10:46.021 "dma_device_type": 1 00:10:46.021 }, 00:10:46.021 { 00:10:46.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.021 "dma_device_type": 2 00:10:46.021 } 00:10:46.021 ], 00:10:46.021 "driver_specific": { 
00:10:46.021 "raid": { 00:10:46.021 "uuid": "a6a1c6d1-9746-4252-b014-b3082d24d4ba", 00:10:46.021 "strip_size_kb": 0, 00:10:46.021 "state": "online", 00:10:46.021 "raid_level": "raid1", 00:10:46.021 "superblock": false, 00:10:46.021 "num_base_bdevs": 4, 00:10:46.021 "num_base_bdevs_discovered": 4, 00:10:46.021 "num_base_bdevs_operational": 4, 00:10:46.021 "base_bdevs_list": [ 00:10:46.021 { 00:10:46.021 "name": "BaseBdev1", 00:10:46.021 "uuid": "829df708-2b02-4b50-9e0a-0f7302847bc8", 00:10:46.021 "is_configured": true, 00:10:46.021 "data_offset": 0, 00:10:46.021 "data_size": 65536 00:10:46.021 }, 00:10:46.021 { 00:10:46.021 "name": "BaseBdev2", 00:10:46.021 "uuid": "fa0e5370-2b66-42c3-9689-8fff92b69fe0", 00:10:46.021 "is_configured": true, 00:10:46.021 "data_offset": 0, 00:10:46.021 "data_size": 65536 00:10:46.021 }, 00:10:46.021 { 00:10:46.021 "name": "BaseBdev3", 00:10:46.021 "uuid": "776f933a-93b3-4cb0-b17f-f7c6d55fbbc5", 00:10:46.021 "is_configured": true, 00:10:46.021 "data_offset": 0, 00:10:46.021 "data_size": 65536 00:10:46.021 }, 00:10:46.021 { 00:10:46.021 "name": "BaseBdev4", 00:10:46.021 "uuid": "38a3eb66-4e1c-42d7-8741-b67361588671", 00:10:46.021 "is_configured": true, 00:10:46.021 "data_offset": 0, 00:10:46.021 "data_size": 65536 00:10:46.021 } 00:10:46.021 ] 00:10:46.021 } 00:10:46.021 } 00:10:46.021 }' 00:10:46.021 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:46.021 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:46.021 BaseBdev2 00:10:46.021 BaseBdev3 00:10:46.021 BaseBdev4' 00:10:46.021 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.021 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:46.021 01:30:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.021 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:46.021 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.021 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.021 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.021 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.021 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.021 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.021 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.021 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.021 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:46.021 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.021 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.021 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.021 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.021 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.021 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.021 01:30:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:46.021 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.021 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.021 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.021 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.281 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.281 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.281 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.281 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.281 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:46.281 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.281 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.281 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.281 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.281 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.281 01:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:46.281 01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.281 
01:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.281 [2024-10-09 01:30:44.980533] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:46.281 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.281 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:46.281 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:46.281 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:46.281 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:46.281 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:46.281 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:46.281 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.281 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.281 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:46.281 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:46.281 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.281 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.281 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.281 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.281 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.281 01:30:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.281 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.281 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.281 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.281 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.281 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.281 "name": "Existed_Raid", 00:10:46.281 "uuid": "a6a1c6d1-9746-4252-b014-b3082d24d4ba", 00:10:46.281 "strip_size_kb": 0, 00:10:46.281 "state": "online", 00:10:46.281 "raid_level": "raid1", 00:10:46.281 "superblock": false, 00:10:46.281 "num_base_bdevs": 4, 00:10:46.281 "num_base_bdevs_discovered": 3, 00:10:46.281 "num_base_bdevs_operational": 3, 00:10:46.281 "base_bdevs_list": [ 00:10:46.281 { 00:10:46.281 "name": null, 00:10:46.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.281 "is_configured": false, 00:10:46.281 "data_offset": 0, 00:10:46.281 "data_size": 65536 00:10:46.281 }, 00:10:46.281 { 00:10:46.281 "name": "BaseBdev2", 00:10:46.281 "uuid": "fa0e5370-2b66-42c3-9689-8fff92b69fe0", 00:10:46.281 "is_configured": true, 00:10:46.281 "data_offset": 0, 00:10:46.281 "data_size": 65536 00:10:46.281 }, 00:10:46.281 { 00:10:46.281 "name": "BaseBdev3", 00:10:46.281 "uuid": "776f933a-93b3-4cb0-b17f-f7c6d55fbbc5", 00:10:46.281 "is_configured": true, 00:10:46.282 "data_offset": 0, 00:10:46.282 "data_size": 65536 00:10:46.282 }, 00:10:46.282 { 00:10:46.282 "name": "BaseBdev4", 00:10:46.282 "uuid": "38a3eb66-4e1c-42d7-8741-b67361588671", 00:10:46.282 "is_configured": true, 00:10:46.282 "data_offset": 0, 00:10:46.282 "data_size": 65536 00:10:46.282 } 00:10:46.282 ] 00:10:46.282 }' 00:10:46.282 01:30:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.282 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.541 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:46.541 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.541 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:46.541 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.541 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.541 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.801 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.801 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:46.801 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:46.801 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:46.801 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.801 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.801 [2024-10-09 01:30:45.457092] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:46.801 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.801 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:46.801 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.801 01:30:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.801 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.801 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.801 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:46.801 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.801 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:46.801 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:46.801 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:46.801 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.801 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.801 [2024-10-09 01:30:45.541273] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:46.801 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.802 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:46.802 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.802 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.802 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:46.802 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.802 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.802 01:30:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.802 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:46.802 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:46.802 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:46.802 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.802 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.802 [2024-10-09 01:30:45.620287] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:46.802 [2024-10-09 01:30:45.620438] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.802 [2024-10-09 01:30:45.640826] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.802 [2024-10-09 01:30:45.640936] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.802 [2024-10-09 01:30:45.640981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:46.802 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.802 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:46.802 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.802 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.802 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.802 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:46.802 01:30:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.802 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.062 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:47.062 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:47.062 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:47.062 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:47.062 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:47.062 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:47.062 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.062 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.062 BaseBdev2 00:10:47.062 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.062 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:47.062 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:47.062 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:47.062 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:47.062 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:47.062 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:47.062 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:47.062 01:30:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.062 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.062 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.062 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:47.062 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.062 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.062 [ 00:10:47.062 { 00:10:47.063 "name": "BaseBdev2", 00:10:47.063 "aliases": [ 00:10:47.063 "9af5e269-5721-4cae-822e-b17e70c7fc51" 00:10:47.063 ], 00:10:47.063 "product_name": "Malloc disk", 00:10:47.063 "block_size": 512, 00:10:47.063 "num_blocks": 65536, 00:10:47.063 "uuid": "9af5e269-5721-4cae-822e-b17e70c7fc51", 00:10:47.063 "assigned_rate_limits": { 00:10:47.063 "rw_ios_per_sec": 0, 00:10:47.063 "rw_mbytes_per_sec": 0, 00:10:47.063 "r_mbytes_per_sec": 0, 00:10:47.063 "w_mbytes_per_sec": 0 00:10:47.063 }, 00:10:47.063 "claimed": false, 00:10:47.063 "zoned": false, 00:10:47.063 "supported_io_types": { 00:10:47.063 "read": true, 00:10:47.063 "write": true, 00:10:47.063 "unmap": true, 00:10:47.063 "flush": true, 00:10:47.063 "reset": true, 00:10:47.063 "nvme_admin": false, 00:10:47.063 "nvme_io": false, 00:10:47.063 "nvme_io_md": false, 00:10:47.063 "write_zeroes": true, 00:10:47.063 "zcopy": true, 00:10:47.063 "get_zone_info": false, 00:10:47.063 "zone_management": false, 00:10:47.063 "zone_append": false, 00:10:47.063 "compare": false, 00:10:47.063 "compare_and_write": false, 00:10:47.063 "abort": true, 00:10:47.063 "seek_hole": false, 00:10:47.063 "seek_data": false, 00:10:47.063 "copy": true, 00:10:47.063 "nvme_iov_md": false 00:10:47.063 }, 00:10:47.063 "memory_domains": [ 00:10:47.063 { 00:10:47.063 
"dma_device_id": "system", 00:10:47.063 "dma_device_type": 1 00:10:47.063 }, 00:10:47.063 { 00:10:47.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.063 "dma_device_type": 2 00:10:47.063 } 00:10:47.063 ], 00:10:47.063 "driver_specific": {} 00:10:47.063 } 00:10:47.063 ] 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.063 BaseBdev3 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:47.063 01:30:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.063 [ 00:10:47.063 { 00:10:47.063 "name": "BaseBdev3", 00:10:47.063 "aliases": [ 00:10:47.063 "b6a638c1-da6b-4fba-9ac5-af9f2b2746ac" 00:10:47.063 ], 00:10:47.063 "product_name": "Malloc disk", 00:10:47.063 "block_size": 512, 00:10:47.063 "num_blocks": 65536, 00:10:47.063 "uuid": "b6a638c1-da6b-4fba-9ac5-af9f2b2746ac", 00:10:47.063 "assigned_rate_limits": { 00:10:47.063 "rw_ios_per_sec": 0, 00:10:47.063 "rw_mbytes_per_sec": 0, 00:10:47.063 "r_mbytes_per_sec": 0, 00:10:47.063 "w_mbytes_per_sec": 0 00:10:47.063 }, 00:10:47.063 "claimed": false, 00:10:47.063 "zoned": false, 00:10:47.063 "supported_io_types": { 00:10:47.063 "read": true, 00:10:47.063 "write": true, 00:10:47.063 "unmap": true, 00:10:47.063 "flush": true, 00:10:47.063 "reset": true, 00:10:47.063 "nvme_admin": false, 00:10:47.063 "nvme_io": false, 00:10:47.063 "nvme_io_md": false, 00:10:47.063 "write_zeroes": true, 00:10:47.063 "zcopy": true, 00:10:47.063 "get_zone_info": false, 00:10:47.063 "zone_management": false, 00:10:47.063 "zone_append": false, 00:10:47.063 "compare": false, 00:10:47.063 "compare_and_write": false, 00:10:47.063 "abort": true, 00:10:47.063 "seek_hole": false, 00:10:47.063 "seek_data": false, 00:10:47.063 "copy": true, 00:10:47.063 "nvme_iov_md": false 00:10:47.063 }, 00:10:47.063 "memory_domains": [ 00:10:47.063 { 00:10:47.063 
"dma_device_id": "system", 00:10:47.063 "dma_device_type": 1 00:10:47.063 }, 00:10:47.063 { 00:10:47.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.063 "dma_device_type": 2 00:10:47.063 } 00:10:47.063 ], 00:10:47.063 "driver_specific": {} 00:10:47.063 } 00:10:47.063 ] 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.063 BaseBdev4 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:47.063 01:30:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.063 [ 00:10:47.063 { 00:10:47.063 "name": "BaseBdev4", 00:10:47.063 "aliases": [ 00:10:47.063 "9927a227-e581-4eb2-95f6-83d5a17d23ad" 00:10:47.063 ], 00:10:47.063 "product_name": "Malloc disk", 00:10:47.063 "block_size": 512, 00:10:47.063 "num_blocks": 65536, 00:10:47.063 "uuid": "9927a227-e581-4eb2-95f6-83d5a17d23ad", 00:10:47.063 "assigned_rate_limits": { 00:10:47.063 "rw_ios_per_sec": 0, 00:10:47.063 "rw_mbytes_per_sec": 0, 00:10:47.063 "r_mbytes_per_sec": 0, 00:10:47.063 "w_mbytes_per_sec": 0 00:10:47.063 }, 00:10:47.063 "claimed": false, 00:10:47.063 "zoned": false, 00:10:47.063 "supported_io_types": { 00:10:47.063 "read": true, 00:10:47.063 "write": true, 00:10:47.063 "unmap": true, 00:10:47.063 "flush": true, 00:10:47.063 "reset": true, 00:10:47.063 "nvme_admin": false, 00:10:47.063 "nvme_io": false, 00:10:47.063 "nvme_io_md": false, 00:10:47.063 "write_zeroes": true, 00:10:47.063 "zcopy": true, 00:10:47.063 "get_zone_info": false, 00:10:47.063 "zone_management": false, 00:10:47.063 "zone_append": false, 00:10:47.063 "compare": false, 00:10:47.063 "compare_and_write": false, 00:10:47.063 "abort": true, 00:10:47.063 "seek_hole": false, 00:10:47.063 "seek_data": false, 00:10:47.063 "copy": true, 00:10:47.063 "nvme_iov_md": false 00:10:47.063 }, 00:10:47.063 "memory_domains": [ 00:10:47.063 { 00:10:47.063 
"dma_device_id": "system", 00:10:47.063 "dma_device_type": 1 00:10:47.063 }, 00:10:47.063 { 00:10:47.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.063 "dma_device_type": 2 00:10:47.063 } 00:10:47.063 ], 00:10:47.063 "driver_specific": {} 00:10:47.063 } 00:10:47.063 ] 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.063 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.063 [2024-10-09 01:30:45.882402] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:47.063 [2024-10-09 01:30:45.882486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:47.063 [2024-10-09 01:30:45.882513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:47.063 [2024-10-09 01:30:45.884713] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:47.063 [2024-10-09 01:30:45.884810] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:47.064 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.064 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 
00:10:47.064 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.064 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.064 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.064 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.064 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.064 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.064 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.064 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.064 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.064 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.064 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.064 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.064 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.064 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.064 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.064 "name": "Existed_Raid", 00:10:47.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.064 "strip_size_kb": 0, 00:10:47.064 "state": "configuring", 00:10:47.064 "raid_level": "raid1", 00:10:47.064 "superblock": false, 00:10:47.064 "num_base_bdevs": 4, 00:10:47.064 
"num_base_bdevs_discovered": 3, 00:10:47.064 "num_base_bdevs_operational": 4, 00:10:47.064 "base_bdevs_list": [ 00:10:47.064 { 00:10:47.064 "name": "BaseBdev1", 00:10:47.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.064 "is_configured": false, 00:10:47.064 "data_offset": 0, 00:10:47.064 "data_size": 0 00:10:47.064 }, 00:10:47.064 { 00:10:47.064 "name": "BaseBdev2", 00:10:47.064 "uuid": "9af5e269-5721-4cae-822e-b17e70c7fc51", 00:10:47.064 "is_configured": true, 00:10:47.064 "data_offset": 0, 00:10:47.064 "data_size": 65536 00:10:47.064 }, 00:10:47.064 { 00:10:47.064 "name": "BaseBdev3", 00:10:47.064 "uuid": "b6a638c1-da6b-4fba-9ac5-af9f2b2746ac", 00:10:47.064 "is_configured": true, 00:10:47.064 "data_offset": 0, 00:10:47.064 "data_size": 65536 00:10:47.064 }, 00:10:47.064 { 00:10:47.064 "name": "BaseBdev4", 00:10:47.064 "uuid": "9927a227-e581-4eb2-95f6-83d5a17d23ad", 00:10:47.064 "is_configured": true, 00:10:47.064 "data_offset": 0, 00:10:47.064 "data_size": 65536 00:10:47.064 } 00:10:47.064 ] 00:10:47.064 }' 00:10:47.064 01:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.064 01:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.634 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:47.634 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.634 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.634 [2024-10-09 01:30:46.358506] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:47.634 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.634 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:47.634 01:30:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.634 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.634 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.634 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.634 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.634 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.634 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.634 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.634 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.634 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.634 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.634 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.634 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.634 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.634 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.634 "name": "Existed_Raid", 00:10:47.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.634 "strip_size_kb": 0, 00:10:47.634 "state": "configuring", 00:10:47.634 "raid_level": "raid1", 00:10:47.634 "superblock": false, 00:10:47.634 "num_base_bdevs": 4, 00:10:47.634 "num_base_bdevs_discovered": 2, 00:10:47.634 
"num_base_bdevs_operational": 4, 00:10:47.634 "base_bdevs_list": [ 00:10:47.634 { 00:10:47.634 "name": "BaseBdev1", 00:10:47.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.634 "is_configured": false, 00:10:47.634 "data_offset": 0, 00:10:47.634 "data_size": 0 00:10:47.634 }, 00:10:47.634 { 00:10:47.634 "name": null, 00:10:47.634 "uuid": "9af5e269-5721-4cae-822e-b17e70c7fc51", 00:10:47.634 "is_configured": false, 00:10:47.634 "data_offset": 0, 00:10:47.634 "data_size": 65536 00:10:47.634 }, 00:10:47.634 { 00:10:47.634 "name": "BaseBdev3", 00:10:47.634 "uuid": "b6a638c1-da6b-4fba-9ac5-af9f2b2746ac", 00:10:47.634 "is_configured": true, 00:10:47.634 "data_offset": 0, 00:10:47.634 "data_size": 65536 00:10:47.634 }, 00:10:47.634 { 00:10:47.634 "name": "BaseBdev4", 00:10:47.634 "uuid": "9927a227-e581-4eb2-95f6-83d5a17d23ad", 00:10:47.634 "is_configured": true, 00:10:47.634 "data_offset": 0, 00:10:47.634 "data_size": 65536 00:10:47.634 } 00:10:47.634 ] 00:10:47.634 }' 00:10:47.634 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.634 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.957 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.957 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:47.957 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.957 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.957 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.957 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:47.957 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:10:47.957 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.957 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.218 [2024-10-09 01:30:46.863326] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:48.218 BaseBdev1 00:10:48.218 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.218 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:48.218 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:48.218 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:48.218 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:48.218 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:48.218 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:48.218 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:48.218 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.218 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.218 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.218 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:48.218 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.218 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.218 [ 00:10:48.218 { 00:10:48.218 "name": 
"BaseBdev1", 00:10:48.218 "aliases": [ 00:10:48.218 "4ca6896a-53db-4fc5-aafc-b7a09421c383" 00:10:48.218 ], 00:10:48.218 "product_name": "Malloc disk", 00:10:48.218 "block_size": 512, 00:10:48.218 "num_blocks": 65536, 00:10:48.218 "uuid": "4ca6896a-53db-4fc5-aafc-b7a09421c383", 00:10:48.218 "assigned_rate_limits": { 00:10:48.218 "rw_ios_per_sec": 0, 00:10:48.218 "rw_mbytes_per_sec": 0, 00:10:48.218 "r_mbytes_per_sec": 0, 00:10:48.218 "w_mbytes_per_sec": 0 00:10:48.218 }, 00:10:48.218 "claimed": true, 00:10:48.218 "claim_type": "exclusive_write", 00:10:48.218 "zoned": false, 00:10:48.218 "supported_io_types": { 00:10:48.218 "read": true, 00:10:48.218 "write": true, 00:10:48.218 "unmap": true, 00:10:48.218 "flush": true, 00:10:48.218 "reset": true, 00:10:48.218 "nvme_admin": false, 00:10:48.218 "nvme_io": false, 00:10:48.218 "nvme_io_md": false, 00:10:48.218 "write_zeroes": true, 00:10:48.218 "zcopy": true, 00:10:48.218 "get_zone_info": false, 00:10:48.218 "zone_management": false, 00:10:48.218 "zone_append": false, 00:10:48.218 "compare": false, 00:10:48.218 "compare_and_write": false, 00:10:48.219 "abort": true, 00:10:48.219 "seek_hole": false, 00:10:48.219 "seek_data": false, 00:10:48.219 "copy": true, 00:10:48.219 "nvme_iov_md": false 00:10:48.219 }, 00:10:48.219 "memory_domains": [ 00:10:48.219 { 00:10:48.219 "dma_device_id": "system", 00:10:48.219 "dma_device_type": 1 00:10:48.219 }, 00:10:48.219 { 00:10:48.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.219 "dma_device_type": 2 00:10:48.219 } 00:10:48.219 ], 00:10:48.219 "driver_specific": {} 00:10:48.219 } 00:10:48.219 ] 00:10:48.219 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.219 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:48.219 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:48.219 01:30:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.219 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.219 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.219 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.219 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.219 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.219 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.219 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.219 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.219 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.219 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.219 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.219 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.219 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.219 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.219 "name": "Existed_Raid", 00:10:48.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.219 "strip_size_kb": 0, 00:10:48.219 "state": "configuring", 00:10:48.219 "raid_level": "raid1", 00:10:48.219 "superblock": false, 00:10:48.219 "num_base_bdevs": 4, 00:10:48.219 "num_base_bdevs_discovered": 3, 00:10:48.219 
"num_base_bdevs_operational": 4, 00:10:48.219 "base_bdevs_list": [ 00:10:48.219 { 00:10:48.219 "name": "BaseBdev1", 00:10:48.219 "uuid": "4ca6896a-53db-4fc5-aafc-b7a09421c383", 00:10:48.219 "is_configured": true, 00:10:48.219 "data_offset": 0, 00:10:48.219 "data_size": 65536 00:10:48.219 }, 00:10:48.219 { 00:10:48.219 "name": null, 00:10:48.219 "uuid": "9af5e269-5721-4cae-822e-b17e70c7fc51", 00:10:48.219 "is_configured": false, 00:10:48.219 "data_offset": 0, 00:10:48.219 "data_size": 65536 00:10:48.219 }, 00:10:48.219 { 00:10:48.219 "name": "BaseBdev3", 00:10:48.219 "uuid": "b6a638c1-da6b-4fba-9ac5-af9f2b2746ac", 00:10:48.219 "is_configured": true, 00:10:48.219 "data_offset": 0, 00:10:48.219 "data_size": 65536 00:10:48.219 }, 00:10:48.219 { 00:10:48.219 "name": "BaseBdev4", 00:10:48.219 "uuid": "9927a227-e581-4eb2-95f6-83d5a17d23ad", 00:10:48.219 "is_configured": true, 00:10:48.219 "data_offset": 0, 00:10:48.219 "data_size": 65536 00:10:48.219 } 00:10:48.219 ] 00:10:48.219 }' 00:10:48.219 01:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.219 01:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.479 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.479 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:48.479 01:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.479 01:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.740 01:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.740 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:48.740 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:48.740 01:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.740 01:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.740 [2024-10-09 01:30:47.415503] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:48.740 01:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.740 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:48.740 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.740 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.740 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.740 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.740 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.740 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.740 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.740 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.740 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.740 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.740 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.740 01:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.740 01:30:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.740 01:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.740 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.740 "name": "Existed_Raid", 00:10:48.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.740 "strip_size_kb": 0, 00:10:48.740 "state": "configuring", 00:10:48.740 "raid_level": "raid1", 00:10:48.740 "superblock": false, 00:10:48.740 "num_base_bdevs": 4, 00:10:48.740 "num_base_bdevs_discovered": 2, 00:10:48.740 "num_base_bdevs_operational": 4, 00:10:48.740 "base_bdevs_list": [ 00:10:48.740 { 00:10:48.740 "name": "BaseBdev1", 00:10:48.740 "uuid": "4ca6896a-53db-4fc5-aafc-b7a09421c383", 00:10:48.740 "is_configured": true, 00:10:48.740 "data_offset": 0, 00:10:48.740 "data_size": 65536 00:10:48.740 }, 00:10:48.740 { 00:10:48.740 "name": null, 00:10:48.740 "uuid": "9af5e269-5721-4cae-822e-b17e70c7fc51", 00:10:48.740 "is_configured": false, 00:10:48.740 "data_offset": 0, 00:10:48.741 "data_size": 65536 00:10:48.741 }, 00:10:48.741 { 00:10:48.741 "name": null, 00:10:48.741 "uuid": "b6a638c1-da6b-4fba-9ac5-af9f2b2746ac", 00:10:48.741 "is_configured": false, 00:10:48.741 "data_offset": 0, 00:10:48.741 "data_size": 65536 00:10:48.741 }, 00:10:48.741 { 00:10:48.741 "name": "BaseBdev4", 00:10:48.741 "uuid": "9927a227-e581-4eb2-95f6-83d5a17d23ad", 00:10:48.741 "is_configured": true, 00:10:48.741 "data_offset": 0, 00:10:48.741 "data_size": 65536 00:10:48.741 } 00:10:48.741 ] 00:10:48.741 }' 00:10:48.741 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.741 01:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.001 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:49.001 01:30:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.001 01:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.001 01:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.262 01:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.262 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:49.262 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:49.262 01:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.262 01:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.262 [2024-10-09 01:30:47.935707] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:49.262 01:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.262 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:49.262 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.262 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.262 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.262 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.262 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.262 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.262 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:10:49.262 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.262 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.262 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.262 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.262 01:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.262 01:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.262 01:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.262 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.262 "name": "Existed_Raid", 00:10:49.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.262 "strip_size_kb": 0, 00:10:49.262 "state": "configuring", 00:10:49.262 "raid_level": "raid1", 00:10:49.262 "superblock": false, 00:10:49.262 "num_base_bdevs": 4, 00:10:49.262 "num_base_bdevs_discovered": 3, 00:10:49.262 "num_base_bdevs_operational": 4, 00:10:49.262 "base_bdevs_list": [ 00:10:49.262 { 00:10:49.262 "name": "BaseBdev1", 00:10:49.262 "uuid": "4ca6896a-53db-4fc5-aafc-b7a09421c383", 00:10:49.262 "is_configured": true, 00:10:49.262 "data_offset": 0, 00:10:49.262 "data_size": 65536 00:10:49.262 }, 00:10:49.262 { 00:10:49.262 "name": null, 00:10:49.262 "uuid": "9af5e269-5721-4cae-822e-b17e70c7fc51", 00:10:49.262 "is_configured": false, 00:10:49.262 "data_offset": 0, 00:10:49.262 "data_size": 65536 00:10:49.262 }, 00:10:49.262 { 00:10:49.262 "name": "BaseBdev3", 00:10:49.262 "uuid": "b6a638c1-da6b-4fba-9ac5-af9f2b2746ac", 00:10:49.262 "is_configured": true, 00:10:49.262 "data_offset": 0, 00:10:49.262 "data_size": 65536 00:10:49.262 }, 00:10:49.262 { 
00:10:49.262 "name": "BaseBdev4", 00:10:49.262 "uuid": "9927a227-e581-4eb2-95f6-83d5a17d23ad", 00:10:49.262 "is_configured": true, 00:10:49.262 "data_offset": 0, 00:10:49.262 "data_size": 65536 00:10:49.262 } 00:10:49.262 ] 00:10:49.262 }' 00:10:49.262 01:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.262 01:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.523 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.523 01:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.523 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:49.523 01:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.523 01:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.783 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:49.783 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:49.783 01:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.783 01:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.783 [2024-10-09 01:30:48.435845] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:49.783 01:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.783 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:49.783 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.783 01:30:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.783 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.783 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.783 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.783 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.783 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.783 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.783 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.783 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.783 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.783 01:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.783 01:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.783 01:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.783 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.783 "name": "Existed_Raid", 00:10:49.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.783 "strip_size_kb": 0, 00:10:49.783 "state": "configuring", 00:10:49.783 "raid_level": "raid1", 00:10:49.783 "superblock": false, 00:10:49.783 "num_base_bdevs": 4, 00:10:49.783 "num_base_bdevs_discovered": 2, 00:10:49.783 "num_base_bdevs_operational": 4, 00:10:49.783 "base_bdevs_list": [ 00:10:49.783 { 00:10:49.783 "name": null, 00:10:49.783 "uuid": 
"4ca6896a-53db-4fc5-aafc-b7a09421c383", 00:10:49.783 "is_configured": false, 00:10:49.783 "data_offset": 0, 00:10:49.783 "data_size": 65536 00:10:49.783 }, 00:10:49.783 { 00:10:49.783 "name": null, 00:10:49.783 "uuid": "9af5e269-5721-4cae-822e-b17e70c7fc51", 00:10:49.783 "is_configured": false, 00:10:49.783 "data_offset": 0, 00:10:49.783 "data_size": 65536 00:10:49.783 }, 00:10:49.783 { 00:10:49.783 "name": "BaseBdev3", 00:10:49.783 "uuid": "b6a638c1-da6b-4fba-9ac5-af9f2b2746ac", 00:10:49.783 "is_configured": true, 00:10:49.783 "data_offset": 0, 00:10:49.783 "data_size": 65536 00:10:49.783 }, 00:10:49.783 { 00:10:49.783 "name": "BaseBdev4", 00:10:49.783 "uuid": "9927a227-e581-4eb2-95f6-83d5a17d23ad", 00:10:49.783 "is_configured": true, 00:10:49.783 "data_offset": 0, 00:10:49.783 "data_size": 65536 00:10:49.783 } 00:10:49.783 ] 00:10:49.783 }' 00:10:49.783 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.783 01:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.043 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:50.043 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.043 01:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.043 01:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.043 01:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.302 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:50.302 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:50.302 01:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:50.302 01:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.302 [2024-10-09 01:30:48.943602] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:50.302 01:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.302 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:50.302 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.302 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.302 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.302 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.302 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.302 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.302 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.302 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.302 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.302 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.302 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.302 01:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.302 01:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.302 01:30:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.302 01:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.302 "name": "Existed_Raid", 00:10:50.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.302 "strip_size_kb": 0, 00:10:50.302 "state": "configuring", 00:10:50.302 "raid_level": "raid1", 00:10:50.302 "superblock": false, 00:10:50.302 "num_base_bdevs": 4, 00:10:50.302 "num_base_bdevs_discovered": 3, 00:10:50.302 "num_base_bdevs_operational": 4, 00:10:50.302 "base_bdevs_list": [ 00:10:50.302 { 00:10:50.302 "name": null, 00:10:50.302 "uuid": "4ca6896a-53db-4fc5-aafc-b7a09421c383", 00:10:50.302 "is_configured": false, 00:10:50.302 "data_offset": 0, 00:10:50.302 "data_size": 65536 00:10:50.302 }, 00:10:50.302 { 00:10:50.302 "name": "BaseBdev2", 00:10:50.302 "uuid": "9af5e269-5721-4cae-822e-b17e70c7fc51", 00:10:50.302 "is_configured": true, 00:10:50.302 "data_offset": 0, 00:10:50.302 "data_size": 65536 00:10:50.302 }, 00:10:50.302 { 00:10:50.302 "name": "BaseBdev3", 00:10:50.302 "uuid": "b6a638c1-da6b-4fba-9ac5-af9f2b2746ac", 00:10:50.302 "is_configured": true, 00:10:50.302 "data_offset": 0, 00:10:50.302 "data_size": 65536 00:10:50.302 }, 00:10:50.302 { 00:10:50.302 "name": "BaseBdev4", 00:10:50.302 "uuid": "9927a227-e581-4eb2-95f6-83d5a17d23ad", 00:10:50.302 "is_configured": true, 00:10:50.302 "data_offset": 0, 00:10:50.302 "data_size": 65536 00:10:50.302 } 00:10:50.302 ] 00:10:50.302 }' 00:10:50.302 01:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.302 01:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.562 01:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.562 01:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:50.562 01:30:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.562 01:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.562 01:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.562 01:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:50.562 01:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.562 01:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.563 01:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.563 01:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:50.823 01:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.823 01:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4ca6896a-53db-4fc5-aafc-b7a09421c383 00:10:50.823 01:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.823 01:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.823 [2024-10-09 01:30:49.516371] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:50.823 [2024-10-09 01:30:49.516511] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:50.823 [2024-10-09 01:30:49.516570] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:50.823 [2024-10-09 01:30:49.516909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:10:50.823 [2024-10-09 01:30:49.517094] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:50.823 
[2024-10-09 01:30:49.517140] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:50.823 [2024-10-09 01:30:49.517375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.823 NewBaseBdev 00:10:50.823 01:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.823 01:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:50.823 01:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:50.823 01:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:50.823 01:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:50.823 01:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:50.823 01:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:50.823 01:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:50.823 01:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.823 01:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.823 01:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.823 01:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:50.823 01:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.823 01:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.823 [ 00:10:50.823 { 00:10:50.823 "name": "NewBaseBdev", 00:10:50.823 "aliases": [ 00:10:50.823 
"4ca6896a-53db-4fc5-aafc-b7a09421c383" 00:10:50.823 ], 00:10:50.823 "product_name": "Malloc disk", 00:10:50.823 "block_size": 512, 00:10:50.823 "num_blocks": 65536, 00:10:50.823 "uuid": "4ca6896a-53db-4fc5-aafc-b7a09421c383", 00:10:50.823 "assigned_rate_limits": { 00:10:50.823 "rw_ios_per_sec": 0, 00:10:50.823 "rw_mbytes_per_sec": 0, 00:10:50.823 "r_mbytes_per_sec": 0, 00:10:50.823 "w_mbytes_per_sec": 0 00:10:50.823 }, 00:10:50.823 "claimed": true, 00:10:50.823 "claim_type": "exclusive_write", 00:10:50.823 "zoned": false, 00:10:50.823 "supported_io_types": { 00:10:50.823 "read": true, 00:10:50.823 "write": true, 00:10:50.823 "unmap": true, 00:10:50.823 "flush": true, 00:10:50.823 "reset": true, 00:10:50.823 "nvme_admin": false, 00:10:50.823 "nvme_io": false, 00:10:50.823 "nvme_io_md": false, 00:10:50.823 "write_zeroes": true, 00:10:50.823 "zcopy": true, 00:10:50.823 "get_zone_info": false, 00:10:50.823 "zone_management": false, 00:10:50.823 "zone_append": false, 00:10:50.823 "compare": false, 00:10:50.823 "compare_and_write": false, 00:10:50.823 "abort": true, 00:10:50.823 "seek_hole": false, 00:10:50.823 "seek_data": false, 00:10:50.823 "copy": true, 00:10:50.823 "nvme_iov_md": false 00:10:50.823 }, 00:10:50.823 "memory_domains": [ 00:10:50.823 { 00:10:50.823 "dma_device_id": "system", 00:10:50.823 "dma_device_type": 1 00:10:50.823 }, 00:10:50.823 { 00:10:50.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.823 "dma_device_type": 2 00:10:50.823 } 00:10:50.823 ], 00:10:50.823 "driver_specific": {} 00:10:50.823 } 00:10:50.823 ] 00:10:50.823 01:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.823 01:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:50.823 01:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:50.823 01:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:10:50.823 01:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.823 01:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.823 01:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.823 01:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.823 01:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.824 01:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.824 01:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.824 01:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.824 01:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.824 01:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.824 01:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.824 01:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.824 01:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.824 01:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.824 "name": "Existed_Raid", 00:10:50.824 "uuid": "91447720-c575-4725-aec5-023280ae7c30", 00:10:50.824 "strip_size_kb": 0, 00:10:50.824 "state": "online", 00:10:50.824 "raid_level": "raid1", 00:10:50.824 "superblock": false, 00:10:50.824 "num_base_bdevs": 4, 00:10:50.824 "num_base_bdevs_discovered": 4, 00:10:50.824 "num_base_bdevs_operational": 4, 00:10:50.824 "base_bdevs_list": [ 00:10:50.824 
{ 00:10:50.824 "name": "NewBaseBdev", 00:10:50.824 "uuid": "4ca6896a-53db-4fc5-aafc-b7a09421c383", 00:10:50.824 "is_configured": true, 00:10:50.824 "data_offset": 0, 00:10:50.824 "data_size": 65536 00:10:50.824 }, 00:10:50.824 { 00:10:50.824 "name": "BaseBdev2", 00:10:50.824 "uuid": "9af5e269-5721-4cae-822e-b17e70c7fc51", 00:10:50.824 "is_configured": true, 00:10:50.824 "data_offset": 0, 00:10:50.824 "data_size": 65536 00:10:50.824 }, 00:10:50.824 { 00:10:50.824 "name": "BaseBdev3", 00:10:50.824 "uuid": "b6a638c1-da6b-4fba-9ac5-af9f2b2746ac", 00:10:50.824 "is_configured": true, 00:10:50.824 "data_offset": 0, 00:10:50.824 "data_size": 65536 00:10:50.824 }, 00:10:50.824 { 00:10:50.824 "name": "BaseBdev4", 00:10:50.824 "uuid": "9927a227-e581-4eb2-95f6-83d5a17d23ad", 00:10:50.824 "is_configured": true, 00:10:50.824 "data_offset": 0, 00:10:50.824 "data_size": 65536 00:10:50.824 } 00:10:50.824 ] 00:10:50.824 }' 00:10:50.824 01:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.824 01:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.395 01:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:51.395 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:51.395 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:51.395 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:51.395 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:51.395 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:51.395 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:51.395 01:30:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:51.395 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.396 [2024-10-09 01:30:50.012856] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:51.396 "name": "Existed_Raid", 00:10:51.396 "aliases": [ 00:10:51.396 "91447720-c575-4725-aec5-023280ae7c30" 00:10:51.396 ], 00:10:51.396 "product_name": "Raid Volume", 00:10:51.396 "block_size": 512, 00:10:51.396 "num_blocks": 65536, 00:10:51.396 "uuid": "91447720-c575-4725-aec5-023280ae7c30", 00:10:51.396 "assigned_rate_limits": { 00:10:51.396 "rw_ios_per_sec": 0, 00:10:51.396 "rw_mbytes_per_sec": 0, 00:10:51.396 "r_mbytes_per_sec": 0, 00:10:51.396 "w_mbytes_per_sec": 0 00:10:51.396 }, 00:10:51.396 "claimed": false, 00:10:51.396 "zoned": false, 00:10:51.396 "supported_io_types": { 00:10:51.396 "read": true, 00:10:51.396 "write": true, 00:10:51.396 "unmap": false, 00:10:51.396 "flush": false, 00:10:51.396 "reset": true, 00:10:51.396 "nvme_admin": false, 00:10:51.396 "nvme_io": false, 00:10:51.396 "nvme_io_md": false, 00:10:51.396 "write_zeroes": true, 00:10:51.396 "zcopy": false, 00:10:51.396 "get_zone_info": false, 00:10:51.396 "zone_management": false, 00:10:51.396 "zone_append": false, 00:10:51.396 "compare": false, 00:10:51.396 "compare_and_write": false, 00:10:51.396 "abort": false, 00:10:51.396 "seek_hole": false, 00:10:51.396 "seek_data": false, 00:10:51.396 "copy": false, 00:10:51.396 "nvme_iov_md": false 00:10:51.396 }, 00:10:51.396 "memory_domains": [ 00:10:51.396 { 00:10:51.396 "dma_device_id": "system", 00:10:51.396 "dma_device_type": 1 00:10:51.396 }, 00:10:51.396 { 
00:10:51.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.396 "dma_device_type": 2 00:10:51.396 }, 00:10:51.396 { 00:10:51.396 "dma_device_id": "system", 00:10:51.396 "dma_device_type": 1 00:10:51.396 }, 00:10:51.396 { 00:10:51.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.396 "dma_device_type": 2 00:10:51.396 }, 00:10:51.396 { 00:10:51.396 "dma_device_id": "system", 00:10:51.396 "dma_device_type": 1 00:10:51.396 }, 00:10:51.396 { 00:10:51.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.396 "dma_device_type": 2 00:10:51.396 }, 00:10:51.396 { 00:10:51.396 "dma_device_id": "system", 00:10:51.396 "dma_device_type": 1 00:10:51.396 }, 00:10:51.396 { 00:10:51.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.396 "dma_device_type": 2 00:10:51.396 } 00:10:51.396 ], 00:10:51.396 "driver_specific": { 00:10:51.396 "raid": { 00:10:51.396 "uuid": "91447720-c575-4725-aec5-023280ae7c30", 00:10:51.396 "strip_size_kb": 0, 00:10:51.396 "state": "online", 00:10:51.396 "raid_level": "raid1", 00:10:51.396 "superblock": false, 00:10:51.396 "num_base_bdevs": 4, 00:10:51.396 "num_base_bdevs_discovered": 4, 00:10:51.396 "num_base_bdevs_operational": 4, 00:10:51.396 "base_bdevs_list": [ 00:10:51.396 { 00:10:51.396 "name": "NewBaseBdev", 00:10:51.396 "uuid": "4ca6896a-53db-4fc5-aafc-b7a09421c383", 00:10:51.396 "is_configured": true, 00:10:51.396 "data_offset": 0, 00:10:51.396 "data_size": 65536 00:10:51.396 }, 00:10:51.396 { 00:10:51.396 "name": "BaseBdev2", 00:10:51.396 "uuid": "9af5e269-5721-4cae-822e-b17e70c7fc51", 00:10:51.396 "is_configured": true, 00:10:51.396 "data_offset": 0, 00:10:51.396 "data_size": 65536 00:10:51.396 }, 00:10:51.396 { 00:10:51.396 "name": "BaseBdev3", 00:10:51.396 "uuid": "b6a638c1-da6b-4fba-9ac5-af9f2b2746ac", 00:10:51.396 "is_configured": true, 00:10:51.396 "data_offset": 0, 00:10:51.396 "data_size": 65536 00:10:51.396 }, 00:10:51.396 { 00:10:51.396 "name": "BaseBdev4", 00:10:51.396 "uuid": "9927a227-e581-4eb2-95f6-83d5a17d23ad", 
00:10:51.396 "is_configured": true, 00:10:51.396 "data_offset": 0, 00:10:51.396 "data_size": 65536 00:10:51.396 } 00:10:51.396 ] 00:10:51.396 } 00:10:51.396 } 00:10:51.396 }' 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:51.396 BaseBdev2 00:10:51.396 BaseBdev3 00:10:51.396 BaseBdev4' 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:51.396 
01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.396 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.657 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:51.657 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.657 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.657 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.657 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.657 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.657 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:51.657 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.657 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.657 [2024-10-09 01:30:50.340652] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:51.657 [2024-10-09 01:30:50.340721] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:51.657 [2024-10-09 01:30:50.340836] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.657 [2024-10-09 01:30:50.341135] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:51.657 [2024-10-09 01:30:50.341184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:51.657 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.657 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 85110 00:10:51.657 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 85110 ']' 00:10:51.657 01:30:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 85110 00:10:51.657 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:51.657 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:51.657 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85110 00:10:51.657 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:51.657 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:51.657 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85110' 00:10:51.657 killing process with pid 85110 00:10:51.657 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 85110 00:10:51.657 [2024-10-09 01:30:50.390150] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:51.657 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 85110 00:10:51.657 [2024-10-09 01:30:50.462732] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:52.229 00:10:52.229 real 0m9.959s 00:10:52.229 user 0m16.667s 00:10:52.229 sys 0m2.200s 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.229 ************************************ 00:10:52.229 END TEST raid_state_function_test 00:10:52.229 ************************************ 00:10:52.229 01:30:50 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:10:52.229 01:30:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:10:52.229 01:30:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.229 01:30:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:52.229 ************************************ 00:10:52.229 START TEST raid_state_function_test_sb 00:10:52.229 ************************************ 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:52.229 01:30:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=85765 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85765' 00:10:52.229 Process raid 
pid: 85765 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 85765 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 85765 ']' 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:52.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:52.229 01:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.229 [2024-10-09 01:30:51.003482] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:10:52.229 [2024-10-09 01:30:51.003635] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.490 [2024-10-09 01:30:51.141289] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:52.490 [2024-10-09 01:30:51.169437] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.490 [2024-10-09 01:30:51.237781] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.490 [2024-10-09 01:30:51.313252] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.490 [2024-10-09 01:30:51.313295] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.060 01:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:53.060 01:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:53.060 01:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:53.060 01:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.060 01:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.060 [2024-10-09 01:30:51.829377] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:53.060 [2024-10-09 01:30:51.829434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:53.060 [2024-10-09 01:30:51.829449] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:53.060 [2024-10-09 01:30:51.829457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:53.060 [2024-10-09 01:30:51.829472] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:53.060 [2024-10-09 01:30:51.829479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:53.060 [2024-10-09 01:30:51.829487] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:53.060 
[2024-10-09 01:30:51.829494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:53.060 01:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.060 01:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:53.060 01:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.060 01:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.060 01:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.060 01:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.060 01:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.060 01:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.060 01:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.060 01:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.060 01:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.060 01:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.060 01:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.060 01:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.060 01:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.060 01:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:10:53.060 01:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.060 "name": "Existed_Raid", 00:10:53.060 "uuid": "8aa58f29-db37-4902-9d81-14fdfac4fc4d", 00:10:53.060 "strip_size_kb": 0, 00:10:53.060 "state": "configuring", 00:10:53.060 "raid_level": "raid1", 00:10:53.060 "superblock": true, 00:10:53.060 "num_base_bdevs": 4, 00:10:53.060 "num_base_bdevs_discovered": 0, 00:10:53.060 "num_base_bdevs_operational": 4, 00:10:53.060 "base_bdevs_list": [ 00:10:53.060 { 00:10:53.060 "name": "BaseBdev1", 00:10:53.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.060 "is_configured": false, 00:10:53.060 "data_offset": 0, 00:10:53.060 "data_size": 0 00:10:53.060 }, 00:10:53.060 { 00:10:53.060 "name": "BaseBdev2", 00:10:53.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.060 "is_configured": false, 00:10:53.060 "data_offset": 0, 00:10:53.060 "data_size": 0 00:10:53.060 }, 00:10:53.060 { 00:10:53.060 "name": "BaseBdev3", 00:10:53.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.060 "is_configured": false, 00:10:53.060 "data_offset": 0, 00:10:53.060 "data_size": 0 00:10:53.060 }, 00:10:53.060 { 00:10:53.060 "name": "BaseBdev4", 00:10:53.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.060 "is_configured": false, 00:10:53.060 "data_offset": 0, 00:10:53.060 "data_size": 0 00:10:53.060 } 00:10:53.060 ] 00:10:53.060 }' 00:10:53.060 01:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.060 01:30:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.630 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.631 
[2024-10-09 01:30:52.329360] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:53.631 [2024-10-09 01:30:52.329402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.631 [2024-10-09 01:30:52.341386] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:53.631 [2024-10-09 01:30:52.341459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:53.631 [2024-10-09 01:30:52.341491] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:53.631 [2024-10-09 01:30:52.341512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:53.631 [2024-10-09 01:30:52.341546] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:53.631 [2024-10-09 01:30:52.341565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:53.631 [2024-10-09 01:30:52.341593] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:53.631 [2024-10-09 01:30:52.341620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.631 01:30:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.631 [2024-10-09 01:30:52.368327] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:53.631 BaseBdev1 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.631 [ 00:10:53.631 { 00:10:53.631 "name": "BaseBdev1", 00:10:53.631 "aliases": [ 00:10:53.631 "36271054-032b-4b3c-8972-55a8e2d6a755" 00:10:53.631 ], 00:10:53.631 "product_name": "Malloc disk", 00:10:53.631 "block_size": 512, 00:10:53.631 "num_blocks": 65536, 00:10:53.631 "uuid": "36271054-032b-4b3c-8972-55a8e2d6a755", 00:10:53.631 "assigned_rate_limits": { 00:10:53.631 "rw_ios_per_sec": 0, 00:10:53.631 "rw_mbytes_per_sec": 0, 00:10:53.631 "r_mbytes_per_sec": 0, 00:10:53.631 "w_mbytes_per_sec": 0 00:10:53.631 }, 00:10:53.631 "claimed": true, 00:10:53.631 "claim_type": "exclusive_write", 00:10:53.631 "zoned": false, 00:10:53.631 "supported_io_types": { 00:10:53.631 "read": true, 00:10:53.631 "write": true, 00:10:53.631 "unmap": true, 00:10:53.631 "flush": true, 00:10:53.631 "reset": true, 00:10:53.631 "nvme_admin": false, 00:10:53.631 "nvme_io": false, 00:10:53.631 "nvme_io_md": false, 00:10:53.631 "write_zeroes": true, 00:10:53.631 "zcopy": true, 00:10:53.631 "get_zone_info": false, 00:10:53.631 "zone_management": false, 00:10:53.631 "zone_append": false, 00:10:53.631 "compare": false, 00:10:53.631 "compare_and_write": false, 00:10:53.631 "abort": true, 00:10:53.631 "seek_hole": false, 00:10:53.631 "seek_data": false, 00:10:53.631 "copy": true, 00:10:53.631 "nvme_iov_md": false 00:10:53.631 }, 00:10:53.631 "memory_domains": [ 00:10:53.631 { 00:10:53.631 "dma_device_id": "system", 00:10:53.631 "dma_device_type": 1 00:10:53.631 }, 00:10:53.631 { 00:10:53.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.631 "dma_device_type": 2 00:10:53.631 } 00:10:53.631 ], 00:10:53.631 "driver_specific": {} 00:10:53.631 } 00:10:53.631 ] 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:53.631 
01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.631 "name": "Existed_Raid", 00:10:53.631 "uuid": "7f796b6b-377b-4d32-968b-ae2aafe251e3", 00:10:53.631 "strip_size_kb": 0, 
00:10:53.631 "state": "configuring", 00:10:53.631 "raid_level": "raid1", 00:10:53.631 "superblock": true, 00:10:53.631 "num_base_bdevs": 4, 00:10:53.631 "num_base_bdevs_discovered": 1, 00:10:53.631 "num_base_bdevs_operational": 4, 00:10:53.631 "base_bdevs_list": [ 00:10:53.631 { 00:10:53.631 "name": "BaseBdev1", 00:10:53.631 "uuid": "36271054-032b-4b3c-8972-55a8e2d6a755", 00:10:53.631 "is_configured": true, 00:10:53.631 "data_offset": 2048, 00:10:53.631 "data_size": 63488 00:10:53.631 }, 00:10:53.631 { 00:10:53.631 "name": "BaseBdev2", 00:10:53.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.631 "is_configured": false, 00:10:53.631 "data_offset": 0, 00:10:53.631 "data_size": 0 00:10:53.631 }, 00:10:53.631 { 00:10:53.631 "name": "BaseBdev3", 00:10:53.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.631 "is_configured": false, 00:10:53.631 "data_offset": 0, 00:10:53.631 "data_size": 0 00:10:53.631 }, 00:10:53.631 { 00:10:53.631 "name": "BaseBdev4", 00:10:53.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.631 "is_configured": false, 00:10:53.631 "data_offset": 0, 00:10:53.631 "data_size": 0 00:10:53.631 } 00:10:53.631 ] 00:10:53.631 }' 00:10:53.631 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.632 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.202 [2024-10-09 01:30:52.864480] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:54.202 [2024-10-09 01:30:52.864546] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, 
state configuring 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.202 [2024-10-09 01:30:52.872511] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:54.202 [2024-10-09 01:30:52.874609] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:54.202 [2024-10-09 01:30:52.874682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:54.202 [2024-10-09 01:30:52.874698] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:54.202 [2024-10-09 01:30:52.874705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:54.202 [2024-10-09 01:30:52.874712] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:54.202 [2024-10-09 01:30:52.874719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.202 "name": "Existed_Raid", 00:10:54.202 "uuid": "8467f9b7-a561-4bda-b63b-6baad6b35e26", 00:10:54.202 "strip_size_kb": 0, 00:10:54.202 "state": "configuring", 00:10:54.202 "raid_level": "raid1", 00:10:54.202 "superblock": true, 00:10:54.202 "num_base_bdevs": 4, 00:10:54.202 "num_base_bdevs_discovered": 1, 00:10:54.202 
"num_base_bdevs_operational": 4, 00:10:54.202 "base_bdevs_list": [ 00:10:54.202 { 00:10:54.202 "name": "BaseBdev1", 00:10:54.202 "uuid": "36271054-032b-4b3c-8972-55a8e2d6a755", 00:10:54.202 "is_configured": true, 00:10:54.202 "data_offset": 2048, 00:10:54.202 "data_size": 63488 00:10:54.202 }, 00:10:54.202 { 00:10:54.202 "name": "BaseBdev2", 00:10:54.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.202 "is_configured": false, 00:10:54.202 "data_offset": 0, 00:10:54.202 "data_size": 0 00:10:54.202 }, 00:10:54.202 { 00:10:54.202 "name": "BaseBdev3", 00:10:54.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.202 "is_configured": false, 00:10:54.202 "data_offset": 0, 00:10:54.202 "data_size": 0 00:10:54.202 }, 00:10:54.202 { 00:10:54.202 "name": "BaseBdev4", 00:10:54.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.202 "is_configured": false, 00:10:54.202 "data_offset": 0, 00:10:54.202 "data_size": 0 00:10:54.202 } 00:10:54.202 ] 00:10:54.202 }' 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.202 01:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.462 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:54.462 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.462 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.722 [2024-10-09 01:30:53.358884] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:54.722 BaseBdev2 00:10:54.722 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.722 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:54.722 01:30:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:54.722 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:54.722 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:54.722 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:54.722 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:54.722 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:54.722 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.722 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.722 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.722 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:54.722 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.722 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.722 [ 00:10:54.722 { 00:10:54.722 "name": "BaseBdev2", 00:10:54.722 "aliases": [ 00:10:54.722 "3ca0ac9b-b5d2-4216-a9be-ee04d78d6c06" 00:10:54.722 ], 00:10:54.722 "product_name": "Malloc disk", 00:10:54.722 "block_size": 512, 00:10:54.722 "num_blocks": 65536, 00:10:54.722 "uuid": "3ca0ac9b-b5d2-4216-a9be-ee04d78d6c06", 00:10:54.722 "assigned_rate_limits": { 00:10:54.722 "rw_ios_per_sec": 0, 00:10:54.722 "rw_mbytes_per_sec": 0, 00:10:54.722 "r_mbytes_per_sec": 0, 00:10:54.722 "w_mbytes_per_sec": 0 00:10:54.722 }, 00:10:54.722 "claimed": true, 00:10:54.722 "claim_type": "exclusive_write", 00:10:54.722 "zoned": false, 00:10:54.722 "supported_io_types": { 
00:10:54.722 "read": true, 00:10:54.722 "write": true, 00:10:54.722 "unmap": true, 00:10:54.722 "flush": true, 00:10:54.722 "reset": true, 00:10:54.722 "nvme_admin": false, 00:10:54.722 "nvme_io": false, 00:10:54.722 "nvme_io_md": false, 00:10:54.722 "write_zeroes": true, 00:10:54.722 "zcopy": true, 00:10:54.722 "get_zone_info": false, 00:10:54.722 "zone_management": false, 00:10:54.722 "zone_append": false, 00:10:54.722 "compare": false, 00:10:54.722 "compare_and_write": false, 00:10:54.722 "abort": true, 00:10:54.722 "seek_hole": false, 00:10:54.722 "seek_data": false, 00:10:54.722 "copy": true, 00:10:54.722 "nvme_iov_md": false 00:10:54.722 }, 00:10:54.723 "memory_domains": [ 00:10:54.723 { 00:10:54.723 "dma_device_id": "system", 00:10:54.723 "dma_device_type": 1 00:10:54.723 }, 00:10:54.723 { 00:10:54.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.723 "dma_device_type": 2 00:10:54.723 } 00:10:54.723 ], 00:10:54.723 "driver_specific": {} 00:10:54.723 } 00:10:54.723 ] 00:10:54.723 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.723 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:54.723 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:54.723 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:54.723 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:54.723 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.723 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.723 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.723 01:30:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.723 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.723 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.723 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.723 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.723 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.723 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.723 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.723 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.723 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.723 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.723 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.723 "name": "Existed_Raid", 00:10:54.723 "uuid": "8467f9b7-a561-4bda-b63b-6baad6b35e26", 00:10:54.723 "strip_size_kb": 0, 00:10:54.723 "state": "configuring", 00:10:54.723 "raid_level": "raid1", 00:10:54.723 "superblock": true, 00:10:54.723 "num_base_bdevs": 4, 00:10:54.723 "num_base_bdevs_discovered": 2, 00:10:54.723 "num_base_bdevs_operational": 4, 00:10:54.723 "base_bdevs_list": [ 00:10:54.723 { 00:10:54.723 "name": "BaseBdev1", 00:10:54.723 "uuid": "36271054-032b-4b3c-8972-55a8e2d6a755", 00:10:54.723 "is_configured": true, 00:10:54.723 "data_offset": 2048, 00:10:54.723 "data_size": 63488 00:10:54.723 }, 00:10:54.723 { 00:10:54.723 "name": "BaseBdev2", 00:10:54.723 
"uuid": "3ca0ac9b-b5d2-4216-a9be-ee04d78d6c06", 00:10:54.723 "is_configured": true, 00:10:54.723 "data_offset": 2048, 00:10:54.723 "data_size": 63488 00:10:54.723 }, 00:10:54.723 { 00:10:54.723 "name": "BaseBdev3", 00:10:54.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.723 "is_configured": false, 00:10:54.723 "data_offset": 0, 00:10:54.723 "data_size": 0 00:10:54.723 }, 00:10:54.723 { 00:10:54.723 "name": "BaseBdev4", 00:10:54.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.723 "is_configured": false, 00:10:54.723 "data_offset": 0, 00:10:54.723 "data_size": 0 00:10:54.723 } 00:10:54.723 ] 00:10:54.723 }' 00:10:54.723 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.723 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.983 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:54.983 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.983 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.983 [2024-10-09 01:30:53.855614] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:54.983 BaseBdev3 00:10:54.983 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.983 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:54.983 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:54.983 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:54.983 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:54.983 01:30:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:54.983 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:54.983 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:54.983 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.983 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.983 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.983 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:54.983 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.983 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.244 [ 00:10:55.244 { 00:10:55.244 "name": "BaseBdev3", 00:10:55.244 "aliases": [ 00:10:55.244 "f7ed84cf-e5dc-4d27-ba31-a4f2bf7c423f" 00:10:55.244 ], 00:10:55.244 "product_name": "Malloc disk", 00:10:55.244 "block_size": 512, 00:10:55.244 "num_blocks": 65536, 00:10:55.244 "uuid": "f7ed84cf-e5dc-4d27-ba31-a4f2bf7c423f", 00:10:55.244 "assigned_rate_limits": { 00:10:55.244 "rw_ios_per_sec": 0, 00:10:55.244 "rw_mbytes_per_sec": 0, 00:10:55.244 "r_mbytes_per_sec": 0, 00:10:55.244 "w_mbytes_per_sec": 0 00:10:55.244 }, 00:10:55.244 "claimed": true, 00:10:55.244 "claim_type": "exclusive_write", 00:10:55.244 "zoned": false, 00:10:55.244 "supported_io_types": { 00:10:55.244 "read": true, 00:10:55.244 "write": true, 00:10:55.244 "unmap": true, 00:10:55.244 "flush": true, 00:10:55.244 "reset": true, 00:10:55.244 "nvme_admin": false, 00:10:55.244 "nvme_io": false, 00:10:55.244 "nvme_io_md": false, 00:10:55.244 "write_zeroes": true, 00:10:55.244 "zcopy": true, 00:10:55.244 "get_zone_info": false, 00:10:55.244 
"zone_management": false, 00:10:55.244 "zone_append": false, 00:10:55.244 "compare": false, 00:10:55.244 "compare_and_write": false, 00:10:55.244 "abort": true, 00:10:55.244 "seek_hole": false, 00:10:55.244 "seek_data": false, 00:10:55.244 "copy": true, 00:10:55.244 "nvme_iov_md": false 00:10:55.244 }, 00:10:55.244 "memory_domains": [ 00:10:55.244 { 00:10:55.244 "dma_device_id": "system", 00:10:55.244 "dma_device_type": 1 00:10:55.244 }, 00:10:55.244 { 00:10:55.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.244 "dma_device_type": 2 00:10:55.244 } 00:10:55.244 ], 00:10:55.244 "driver_specific": {} 00:10:55.244 } 00:10:55.244 ] 00:10:55.244 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.244 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:55.244 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:55.244 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.244 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:55.244 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.244 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.244 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.244 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.244 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.244 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.244 01:30:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.244 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.244 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.244 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.244 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.244 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.244 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.244 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.244 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.244 "name": "Existed_Raid", 00:10:55.244 "uuid": "8467f9b7-a561-4bda-b63b-6baad6b35e26", 00:10:55.244 "strip_size_kb": 0, 00:10:55.244 "state": "configuring", 00:10:55.244 "raid_level": "raid1", 00:10:55.244 "superblock": true, 00:10:55.244 "num_base_bdevs": 4, 00:10:55.244 "num_base_bdevs_discovered": 3, 00:10:55.244 "num_base_bdevs_operational": 4, 00:10:55.244 "base_bdevs_list": [ 00:10:55.244 { 00:10:55.244 "name": "BaseBdev1", 00:10:55.244 "uuid": "36271054-032b-4b3c-8972-55a8e2d6a755", 00:10:55.244 "is_configured": true, 00:10:55.244 "data_offset": 2048, 00:10:55.244 "data_size": 63488 00:10:55.244 }, 00:10:55.244 { 00:10:55.244 "name": "BaseBdev2", 00:10:55.244 "uuid": "3ca0ac9b-b5d2-4216-a9be-ee04d78d6c06", 00:10:55.244 "is_configured": true, 00:10:55.244 "data_offset": 2048, 00:10:55.244 "data_size": 63488 00:10:55.244 }, 00:10:55.244 { 00:10:55.244 "name": "BaseBdev3", 00:10:55.244 "uuid": "f7ed84cf-e5dc-4d27-ba31-a4f2bf7c423f", 00:10:55.244 "is_configured": true, 00:10:55.244 "data_offset": 2048, 
00:10:55.244 "data_size": 63488 00:10:55.244 }, 00:10:55.244 { 00:10:55.244 "name": "BaseBdev4", 00:10:55.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.244 "is_configured": false, 00:10:55.244 "data_offset": 0, 00:10:55.244 "data_size": 0 00:10:55.244 } 00:10:55.244 ] 00:10:55.244 }' 00:10:55.244 01:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.244 01:30:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.504 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:55.504 01:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.504 01:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.504 [2024-10-09 01:30:54.392467] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:55.504 [2024-10-09 01:30:54.392727] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:55.504 [2024-10-09 01:30:54.392755] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:55.504 BaseBdev4 00:10:55.504 [2024-10-09 01:30:54.393093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:55.504 [2024-10-09 01:30:54.393257] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:55.504 [2024-10-09 01:30:54.393269] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:55.505 [2024-10-09 01:30:54.393431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.505 01:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.505 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev4 00:10:55.505 01:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:55.505 01:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:55.505 01:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:55.505 01:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:55.505 01:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:55.505 01:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:55.505 01:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.505 01:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.765 01:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.765 01:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:55.765 01:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.765 01:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.765 [ 00:10:55.765 { 00:10:55.765 "name": "BaseBdev4", 00:10:55.765 "aliases": [ 00:10:55.765 "7f8dea80-bfcb-4823-90f9-b12ffd03d8a7" 00:10:55.765 ], 00:10:55.765 "product_name": "Malloc disk", 00:10:55.765 "block_size": 512, 00:10:55.765 "num_blocks": 65536, 00:10:55.765 "uuid": "7f8dea80-bfcb-4823-90f9-b12ffd03d8a7", 00:10:55.765 "assigned_rate_limits": { 00:10:55.765 "rw_ios_per_sec": 0, 00:10:55.765 "rw_mbytes_per_sec": 0, 00:10:55.765 "r_mbytes_per_sec": 0, 00:10:55.765 "w_mbytes_per_sec": 0 00:10:55.765 }, 00:10:55.765 "claimed": true, 00:10:55.765 "claim_type": 
"exclusive_write", 00:10:55.765 "zoned": false, 00:10:55.765 "supported_io_types": { 00:10:55.765 "read": true, 00:10:55.765 "write": true, 00:10:55.765 "unmap": true, 00:10:55.765 "flush": true, 00:10:55.765 "reset": true, 00:10:55.765 "nvme_admin": false, 00:10:55.765 "nvme_io": false, 00:10:55.765 "nvme_io_md": false, 00:10:55.765 "write_zeroes": true, 00:10:55.765 "zcopy": true, 00:10:55.765 "get_zone_info": false, 00:10:55.765 "zone_management": false, 00:10:55.765 "zone_append": false, 00:10:55.765 "compare": false, 00:10:55.765 "compare_and_write": false, 00:10:55.765 "abort": true, 00:10:55.765 "seek_hole": false, 00:10:55.765 "seek_data": false, 00:10:55.765 "copy": true, 00:10:55.765 "nvme_iov_md": false 00:10:55.765 }, 00:10:55.765 "memory_domains": [ 00:10:55.765 { 00:10:55.765 "dma_device_id": "system", 00:10:55.765 "dma_device_type": 1 00:10:55.765 }, 00:10:55.765 { 00:10:55.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.765 "dma_device_type": 2 00:10:55.765 } 00:10:55.765 ], 00:10:55.765 "driver_specific": {} 00:10:55.765 } 00:10:55.765 ] 00:10:55.765 01:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.765 01:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:55.765 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:55.765 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.765 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:55.765 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.765 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.765 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:55.765 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.765 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.765 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.765 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.765 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.765 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.765 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.765 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.765 01:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.765 01:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.765 01:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.765 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.765 "name": "Existed_Raid", 00:10:55.765 "uuid": "8467f9b7-a561-4bda-b63b-6baad6b35e26", 00:10:55.765 "strip_size_kb": 0, 00:10:55.765 "state": "online", 00:10:55.765 "raid_level": "raid1", 00:10:55.765 "superblock": true, 00:10:55.765 "num_base_bdevs": 4, 00:10:55.765 "num_base_bdevs_discovered": 4, 00:10:55.765 "num_base_bdevs_operational": 4, 00:10:55.765 "base_bdevs_list": [ 00:10:55.765 { 00:10:55.765 "name": "BaseBdev1", 00:10:55.765 "uuid": "36271054-032b-4b3c-8972-55a8e2d6a755", 00:10:55.765 "is_configured": true, 00:10:55.765 "data_offset": 2048, 00:10:55.765 "data_size": 63488 
00:10:55.765 }, 00:10:55.765 { 00:10:55.765 "name": "BaseBdev2", 00:10:55.765 "uuid": "3ca0ac9b-b5d2-4216-a9be-ee04d78d6c06", 00:10:55.765 "is_configured": true, 00:10:55.765 "data_offset": 2048, 00:10:55.765 "data_size": 63488 00:10:55.766 }, 00:10:55.766 { 00:10:55.766 "name": "BaseBdev3", 00:10:55.766 "uuid": "f7ed84cf-e5dc-4d27-ba31-a4f2bf7c423f", 00:10:55.766 "is_configured": true, 00:10:55.766 "data_offset": 2048, 00:10:55.766 "data_size": 63488 00:10:55.766 }, 00:10:55.766 { 00:10:55.766 "name": "BaseBdev4", 00:10:55.766 "uuid": "7f8dea80-bfcb-4823-90f9-b12ffd03d8a7", 00:10:55.766 "is_configured": true, 00:10:55.766 "data_offset": 2048, 00:10:55.766 "data_size": 63488 00:10:55.766 } 00:10:55.766 ] 00:10:55.766 }' 00:10:55.766 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.766 01:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.025 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:56.025 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:56.025 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:56.025 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:56.025 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:56.025 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:56.025 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:56.025 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:56.026 01:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.026 
01:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.026 [2024-10-09 01:30:54.860932] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:56.026 01:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.026 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:56.026 "name": "Existed_Raid", 00:10:56.026 "aliases": [ 00:10:56.026 "8467f9b7-a561-4bda-b63b-6baad6b35e26" 00:10:56.026 ], 00:10:56.026 "product_name": "Raid Volume", 00:10:56.026 "block_size": 512, 00:10:56.026 "num_blocks": 63488, 00:10:56.026 "uuid": "8467f9b7-a561-4bda-b63b-6baad6b35e26", 00:10:56.026 "assigned_rate_limits": { 00:10:56.026 "rw_ios_per_sec": 0, 00:10:56.026 "rw_mbytes_per_sec": 0, 00:10:56.026 "r_mbytes_per_sec": 0, 00:10:56.026 "w_mbytes_per_sec": 0 00:10:56.026 }, 00:10:56.026 "claimed": false, 00:10:56.026 "zoned": false, 00:10:56.026 "supported_io_types": { 00:10:56.026 "read": true, 00:10:56.026 "write": true, 00:10:56.026 "unmap": false, 00:10:56.026 "flush": false, 00:10:56.026 "reset": true, 00:10:56.026 "nvme_admin": false, 00:10:56.026 "nvme_io": false, 00:10:56.026 "nvme_io_md": false, 00:10:56.026 "write_zeroes": true, 00:10:56.026 "zcopy": false, 00:10:56.026 "get_zone_info": false, 00:10:56.026 "zone_management": false, 00:10:56.026 "zone_append": false, 00:10:56.026 "compare": false, 00:10:56.026 "compare_and_write": false, 00:10:56.026 "abort": false, 00:10:56.026 "seek_hole": false, 00:10:56.026 "seek_data": false, 00:10:56.026 "copy": false, 00:10:56.026 "nvme_iov_md": false 00:10:56.026 }, 00:10:56.026 "memory_domains": [ 00:10:56.026 { 00:10:56.026 "dma_device_id": "system", 00:10:56.026 "dma_device_type": 1 00:10:56.026 }, 00:10:56.026 { 00:10:56.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.026 "dma_device_type": 2 00:10:56.026 }, 00:10:56.026 { 00:10:56.026 "dma_device_id": "system", 
00:10:56.026 "dma_device_type": 1 00:10:56.026 }, 00:10:56.026 { 00:10:56.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.026 "dma_device_type": 2 00:10:56.026 }, 00:10:56.026 { 00:10:56.026 "dma_device_id": "system", 00:10:56.026 "dma_device_type": 1 00:10:56.026 }, 00:10:56.026 { 00:10:56.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.026 "dma_device_type": 2 00:10:56.026 }, 00:10:56.026 { 00:10:56.026 "dma_device_id": "system", 00:10:56.026 "dma_device_type": 1 00:10:56.026 }, 00:10:56.026 { 00:10:56.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.026 "dma_device_type": 2 00:10:56.026 } 00:10:56.026 ], 00:10:56.026 "driver_specific": { 00:10:56.026 "raid": { 00:10:56.026 "uuid": "8467f9b7-a561-4bda-b63b-6baad6b35e26", 00:10:56.026 "strip_size_kb": 0, 00:10:56.026 "state": "online", 00:10:56.026 "raid_level": "raid1", 00:10:56.026 "superblock": true, 00:10:56.026 "num_base_bdevs": 4, 00:10:56.026 "num_base_bdevs_discovered": 4, 00:10:56.026 "num_base_bdevs_operational": 4, 00:10:56.026 "base_bdevs_list": [ 00:10:56.026 { 00:10:56.026 "name": "BaseBdev1", 00:10:56.026 "uuid": "36271054-032b-4b3c-8972-55a8e2d6a755", 00:10:56.026 "is_configured": true, 00:10:56.026 "data_offset": 2048, 00:10:56.026 "data_size": 63488 00:10:56.026 }, 00:10:56.026 { 00:10:56.026 "name": "BaseBdev2", 00:10:56.026 "uuid": "3ca0ac9b-b5d2-4216-a9be-ee04d78d6c06", 00:10:56.026 "is_configured": true, 00:10:56.026 "data_offset": 2048, 00:10:56.026 "data_size": 63488 00:10:56.026 }, 00:10:56.026 { 00:10:56.026 "name": "BaseBdev3", 00:10:56.026 "uuid": "f7ed84cf-e5dc-4d27-ba31-a4f2bf7c423f", 00:10:56.026 "is_configured": true, 00:10:56.026 "data_offset": 2048, 00:10:56.026 "data_size": 63488 00:10:56.026 }, 00:10:56.026 { 00:10:56.026 "name": "BaseBdev4", 00:10:56.026 "uuid": "7f8dea80-bfcb-4823-90f9-b12ffd03d8a7", 00:10:56.026 "is_configured": true, 00:10:56.026 "data_offset": 2048, 00:10:56.026 "data_size": 63488 00:10:56.026 } 00:10:56.026 ] 00:10:56.026 } 00:10:56.026 
} 00:10:56.026 }' 00:10:56.026 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:56.286 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:56.286 BaseBdev2 00:10:56.286 BaseBdev3 00:10:56.286 BaseBdev4' 00:10:56.286 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.286 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:56.286 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.286 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:56.286 01:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.286 01:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.286 01:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.286 01:30:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.286 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.546 [2024-10-09 01:30:55.180795] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:56.546 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.546 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:56.546 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:56.546 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:56.546 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:56.546 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:56.546 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:56.546 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.546 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:10:56.546 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.546 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.546 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.546 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.546 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.546 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.546 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.546 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.546 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.546 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.546 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.546 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.546 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.546 "name": "Existed_Raid", 00:10:56.546 "uuid": "8467f9b7-a561-4bda-b63b-6baad6b35e26", 00:10:56.546 "strip_size_kb": 0, 00:10:56.546 "state": "online", 00:10:56.546 "raid_level": "raid1", 00:10:56.546 "superblock": true, 00:10:56.546 "num_base_bdevs": 4, 00:10:56.546 "num_base_bdevs_discovered": 3, 00:10:56.546 "num_base_bdevs_operational": 3, 00:10:56.546 "base_bdevs_list": [ 00:10:56.546 { 00:10:56.546 "name": null, 00:10:56.546 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:56.546 "is_configured": false, 00:10:56.546 "data_offset": 0, 00:10:56.546 "data_size": 63488 00:10:56.546 }, 00:10:56.546 { 00:10:56.546 "name": "BaseBdev2", 00:10:56.546 "uuid": "3ca0ac9b-b5d2-4216-a9be-ee04d78d6c06", 00:10:56.546 "is_configured": true, 00:10:56.546 "data_offset": 2048, 00:10:56.546 "data_size": 63488 00:10:56.546 }, 00:10:56.546 { 00:10:56.546 "name": "BaseBdev3", 00:10:56.546 "uuid": "f7ed84cf-e5dc-4d27-ba31-a4f2bf7c423f", 00:10:56.546 "is_configured": true, 00:10:56.546 "data_offset": 2048, 00:10:56.546 "data_size": 63488 00:10:56.546 }, 00:10:56.546 { 00:10:56.546 "name": "BaseBdev4", 00:10:56.546 "uuid": "7f8dea80-bfcb-4823-90f9-b12ffd03d8a7", 00:10:56.546 "is_configured": true, 00:10:56.546 "data_offset": 2048, 00:10:56.546 "data_size": 63488 00:10:56.546 } 00:10:56.546 ] 00:10:56.546 }' 00:10:56.546 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.546 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.805 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:56.805 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:56.805 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:56.805 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.805 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.805 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.805 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.805 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:56.805 01:30:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:56.805 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:56.805 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.805 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.065 [2024-10-09 01:30:55.697314] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:57.065 [2024-10-09 01:30:55.777624] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.065 [2024-10-09 01:30:55.857491] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:57.065 [2024-10-09 01:30:55.857624] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.065 [2024-10-09 01:30:55.878152] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.065 [2024-10-09 
01:30:55.878209] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.065 [2024-10-09 01:30:55.878221] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.065 01:30:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.065 BaseBdev2 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.065 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.325 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.325 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:57.325 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.325 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.325 [ 00:10:57.325 { 00:10:57.325 "name": "BaseBdev2", 00:10:57.325 "aliases": [ 00:10:57.325 "b6f6f636-fed6-4e61-acf3-4ed636804f4e" 00:10:57.325 ], 00:10:57.325 "product_name": "Malloc disk", 00:10:57.325 "block_size": 512, 00:10:57.325 "num_blocks": 65536, 00:10:57.325 
"uuid": "b6f6f636-fed6-4e61-acf3-4ed636804f4e", 00:10:57.325 "assigned_rate_limits": { 00:10:57.325 "rw_ios_per_sec": 0, 00:10:57.325 "rw_mbytes_per_sec": 0, 00:10:57.325 "r_mbytes_per_sec": 0, 00:10:57.325 "w_mbytes_per_sec": 0 00:10:57.325 }, 00:10:57.325 "claimed": false, 00:10:57.325 "zoned": false, 00:10:57.325 "supported_io_types": { 00:10:57.325 "read": true, 00:10:57.325 "write": true, 00:10:57.325 "unmap": true, 00:10:57.325 "flush": true, 00:10:57.325 "reset": true, 00:10:57.325 "nvme_admin": false, 00:10:57.325 "nvme_io": false, 00:10:57.325 "nvme_io_md": false, 00:10:57.325 "write_zeroes": true, 00:10:57.325 "zcopy": true, 00:10:57.325 "get_zone_info": false, 00:10:57.325 "zone_management": false, 00:10:57.325 "zone_append": false, 00:10:57.325 "compare": false, 00:10:57.325 "compare_and_write": false, 00:10:57.325 "abort": true, 00:10:57.325 "seek_hole": false, 00:10:57.325 "seek_data": false, 00:10:57.325 "copy": true, 00:10:57.325 "nvme_iov_md": false 00:10:57.325 }, 00:10:57.325 "memory_domains": [ 00:10:57.325 { 00:10:57.325 "dma_device_id": "system", 00:10:57.325 "dma_device_type": 1 00:10:57.325 }, 00:10:57.325 { 00:10:57.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.325 "dma_device_type": 2 00:10:57.325 } 00:10:57.325 ], 00:10:57.325 "driver_specific": {} 00:10:57.325 } 00:10:57.325 ] 00:10:57.325 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.325 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:57.325 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:57.325 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.325 01:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:57.325 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:57.325 01:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.325 BaseBdev3 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.325 [ 00:10:57.325 { 00:10:57.325 "name": "BaseBdev3", 00:10:57.325 "aliases": [ 00:10:57.325 "fd59a735-4022-4427-b996-2eec19ea0a10" 00:10:57.325 ], 00:10:57.325 "product_name": "Malloc disk", 00:10:57.325 "block_size": 512, 
00:10:57.325 "num_blocks": 65536, 00:10:57.325 "uuid": "fd59a735-4022-4427-b996-2eec19ea0a10", 00:10:57.325 "assigned_rate_limits": { 00:10:57.325 "rw_ios_per_sec": 0, 00:10:57.325 "rw_mbytes_per_sec": 0, 00:10:57.325 "r_mbytes_per_sec": 0, 00:10:57.325 "w_mbytes_per_sec": 0 00:10:57.325 }, 00:10:57.325 "claimed": false, 00:10:57.325 "zoned": false, 00:10:57.325 "supported_io_types": { 00:10:57.325 "read": true, 00:10:57.325 "write": true, 00:10:57.325 "unmap": true, 00:10:57.325 "flush": true, 00:10:57.325 "reset": true, 00:10:57.325 "nvme_admin": false, 00:10:57.325 "nvme_io": false, 00:10:57.325 "nvme_io_md": false, 00:10:57.325 "write_zeroes": true, 00:10:57.325 "zcopy": true, 00:10:57.325 "get_zone_info": false, 00:10:57.325 "zone_management": false, 00:10:57.325 "zone_append": false, 00:10:57.325 "compare": false, 00:10:57.325 "compare_and_write": false, 00:10:57.325 "abort": true, 00:10:57.325 "seek_hole": false, 00:10:57.325 "seek_data": false, 00:10:57.325 "copy": true, 00:10:57.325 "nvme_iov_md": false 00:10:57.325 }, 00:10:57.325 "memory_domains": [ 00:10:57.325 { 00:10:57.325 "dma_device_id": "system", 00:10:57.325 "dma_device_type": 1 00:10:57.325 }, 00:10:57.325 { 00:10:57.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.325 "dma_device_type": 2 00:10:57.325 } 00:10:57.325 ], 00:10:57.325 "driver_specific": {} 00:10:57.325 } 00:10:57.325 ] 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:57.325 01:30:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.325 BaseBdev4 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.325 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.325 [ 00:10:57.325 { 00:10:57.325 "name": "BaseBdev4", 00:10:57.326 "aliases": [ 00:10:57.326 "7ac77dc7-4267-4417-99da-8c78c9e46b46" 00:10:57.326 ], 
00:10:57.326 "product_name": "Malloc disk", 00:10:57.326 "block_size": 512, 00:10:57.326 "num_blocks": 65536, 00:10:57.326 "uuid": "7ac77dc7-4267-4417-99da-8c78c9e46b46", 00:10:57.326 "assigned_rate_limits": { 00:10:57.326 "rw_ios_per_sec": 0, 00:10:57.326 "rw_mbytes_per_sec": 0, 00:10:57.326 "r_mbytes_per_sec": 0, 00:10:57.326 "w_mbytes_per_sec": 0 00:10:57.326 }, 00:10:57.326 "claimed": false, 00:10:57.326 "zoned": false, 00:10:57.326 "supported_io_types": { 00:10:57.326 "read": true, 00:10:57.326 "write": true, 00:10:57.326 "unmap": true, 00:10:57.326 "flush": true, 00:10:57.326 "reset": true, 00:10:57.326 "nvme_admin": false, 00:10:57.326 "nvme_io": false, 00:10:57.326 "nvme_io_md": false, 00:10:57.326 "write_zeroes": true, 00:10:57.326 "zcopy": true, 00:10:57.326 "get_zone_info": false, 00:10:57.326 "zone_management": false, 00:10:57.326 "zone_append": false, 00:10:57.326 "compare": false, 00:10:57.326 "compare_and_write": false, 00:10:57.326 "abort": true, 00:10:57.326 "seek_hole": false, 00:10:57.326 "seek_data": false, 00:10:57.326 "copy": true, 00:10:57.326 "nvme_iov_md": false 00:10:57.326 }, 00:10:57.326 "memory_domains": [ 00:10:57.326 { 00:10:57.326 "dma_device_id": "system", 00:10:57.326 "dma_device_type": 1 00:10:57.326 }, 00:10:57.326 { 00:10:57.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.326 "dma_device_type": 2 00:10:57.326 } 00:10:57.326 ], 00:10:57.326 "driver_specific": {} 00:10:57.326 } 00:10:57.326 ] 00:10:57.326 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.326 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:57.326 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:57.326 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.326 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd 
bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:57.326 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.326 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.326 [2024-10-09 01:30:56.106278] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:57.326 [2024-10-09 01:30:56.106343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:57.326 [2024-10-09 01:30:56.106366] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:57.326 [2024-10-09 01:30:56.108387] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:57.326 [2024-10-09 01:30:56.108459] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:57.326 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.326 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:57.326 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.326 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.326 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.326 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.326 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.326 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.326 01:30:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.326 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.326 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.326 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.326 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.326 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.326 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.326 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.326 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.326 "name": "Existed_Raid", 00:10:57.326 "uuid": "50d8d4e8-16db-4314-9381-8939f0862d59", 00:10:57.326 "strip_size_kb": 0, 00:10:57.326 "state": "configuring", 00:10:57.326 "raid_level": "raid1", 00:10:57.326 "superblock": true, 00:10:57.326 "num_base_bdevs": 4, 00:10:57.326 "num_base_bdevs_discovered": 3, 00:10:57.326 "num_base_bdevs_operational": 4, 00:10:57.326 "base_bdevs_list": [ 00:10:57.326 { 00:10:57.326 "name": "BaseBdev1", 00:10:57.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.326 "is_configured": false, 00:10:57.326 "data_offset": 0, 00:10:57.326 "data_size": 0 00:10:57.326 }, 00:10:57.326 { 00:10:57.326 "name": "BaseBdev2", 00:10:57.326 "uuid": "b6f6f636-fed6-4e61-acf3-4ed636804f4e", 00:10:57.326 "is_configured": true, 00:10:57.326 "data_offset": 2048, 00:10:57.326 "data_size": 63488 00:10:57.326 }, 00:10:57.326 { 00:10:57.326 "name": "BaseBdev3", 00:10:57.326 "uuid": "fd59a735-4022-4427-b996-2eec19ea0a10", 00:10:57.326 "is_configured": true, 00:10:57.326 "data_offset": 2048, 
00:10:57.326 "data_size": 63488 00:10:57.326 }, 00:10:57.326 { 00:10:57.326 "name": "BaseBdev4", 00:10:57.326 "uuid": "7ac77dc7-4267-4417-99da-8c78c9e46b46", 00:10:57.326 "is_configured": true, 00:10:57.326 "data_offset": 2048, 00:10:57.326 "data_size": 63488 00:10:57.326 } 00:10:57.326 ] 00:10:57.326 }' 00:10:57.326 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.326 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.895 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:57.895 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.895 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.895 [2024-10-09 01:30:56.550373] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:57.895 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.895 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:57.895 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.895 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.895 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.895 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.895 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.895 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.895 01:30:56 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.895 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.895 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.895 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.895 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.895 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.895 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.895 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.895 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.895 "name": "Existed_Raid", 00:10:57.895 "uuid": "50d8d4e8-16db-4314-9381-8939f0862d59", 00:10:57.895 "strip_size_kb": 0, 00:10:57.895 "state": "configuring", 00:10:57.895 "raid_level": "raid1", 00:10:57.895 "superblock": true, 00:10:57.895 "num_base_bdevs": 4, 00:10:57.895 "num_base_bdevs_discovered": 2, 00:10:57.895 "num_base_bdevs_operational": 4, 00:10:57.895 "base_bdevs_list": [ 00:10:57.895 { 00:10:57.895 "name": "BaseBdev1", 00:10:57.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.895 "is_configured": false, 00:10:57.895 "data_offset": 0, 00:10:57.895 "data_size": 0 00:10:57.895 }, 00:10:57.895 { 00:10:57.895 "name": null, 00:10:57.895 "uuid": "b6f6f636-fed6-4e61-acf3-4ed636804f4e", 00:10:57.895 "is_configured": false, 00:10:57.895 "data_offset": 0, 00:10:57.895 "data_size": 63488 00:10:57.895 }, 00:10:57.895 { 00:10:57.895 "name": "BaseBdev3", 00:10:57.895 "uuid": "fd59a735-4022-4427-b996-2eec19ea0a10", 00:10:57.895 "is_configured": true, 00:10:57.895 "data_offset": 2048, 00:10:57.895 
"data_size": 63488 00:10:57.895 }, 00:10:57.895 { 00:10:57.895 "name": "BaseBdev4", 00:10:57.895 "uuid": "7ac77dc7-4267-4417-99da-8c78c9e46b46", 00:10:57.895 "is_configured": true, 00:10:57.895 "data_offset": 2048, 00:10:57.895 "data_size": 63488 00:10:57.895 } 00:10:57.895 ] 00:10:57.895 }' 00:10:57.895 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.895 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.155 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.155 01:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:58.155 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.155 01:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.155 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.155 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:58.155 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:58.155 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.155 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.415 [2024-10-09 01:30:57.059285] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:58.415 BaseBdev1 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.415 [ 00:10:58.415 { 00:10:58.415 "name": "BaseBdev1", 00:10:58.415 "aliases": [ 00:10:58.415 "884ceed1-3fad-4083-957e-27acfbf2cbb1" 00:10:58.415 ], 00:10:58.415 "product_name": "Malloc disk", 00:10:58.415 "block_size": 512, 00:10:58.415 "num_blocks": 65536, 00:10:58.415 "uuid": "884ceed1-3fad-4083-957e-27acfbf2cbb1", 00:10:58.415 "assigned_rate_limits": { 00:10:58.415 "rw_ios_per_sec": 0, 00:10:58.415 "rw_mbytes_per_sec": 0, 00:10:58.415 "r_mbytes_per_sec": 0, 00:10:58.415 "w_mbytes_per_sec": 0 00:10:58.415 }, 00:10:58.415 "claimed": true, 00:10:58.415 "claim_type": "exclusive_write", 00:10:58.415 "zoned": false, 00:10:58.415 "supported_io_types": { 
00:10:58.415 "read": true, 00:10:58.415 "write": true, 00:10:58.415 "unmap": true, 00:10:58.415 "flush": true, 00:10:58.415 "reset": true, 00:10:58.415 "nvme_admin": false, 00:10:58.415 "nvme_io": false, 00:10:58.415 "nvme_io_md": false, 00:10:58.415 "write_zeroes": true, 00:10:58.415 "zcopy": true, 00:10:58.415 "get_zone_info": false, 00:10:58.415 "zone_management": false, 00:10:58.415 "zone_append": false, 00:10:58.415 "compare": false, 00:10:58.415 "compare_and_write": false, 00:10:58.415 "abort": true, 00:10:58.415 "seek_hole": false, 00:10:58.415 "seek_data": false, 00:10:58.415 "copy": true, 00:10:58.415 "nvme_iov_md": false 00:10:58.415 }, 00:10:58.415 "memory_domains": [ 00:10:58.415 { 00:10:58.415 "dma_device_id": "system", 00:10:58.415 "dma_device_type": 1 00:10:58.415 }, 00:10:58.415 { 00:10:58.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.415 "dma_device_type": 2 00:10:58.415 } 00:10:58.415 ], 00:10:58.415 "driver_specific": {} 00:10:58.415 } 00:10:58.415 ] 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.415 01:30:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.415 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.415 "name": "Existed_Raid", 00:10:58.415 "uuid": "50d8d4e8-16db-4314-9381-8939f0862d59", 00:10:58.415 "strip_size_kb": 0, 00:10:58.415 "state": "configuring", 00:10:58.415 "raid_level": "raid1", 00:10:58.415 "superblock": true, 00:10:58.415 "num_base_bdevs": 4, 00:10:58.415 "num_base_bdevs_discovered": 3, 00:10:58.415 "num_base_bdevs_operational": 4, 00:10:58.416 "base_bdevs_list": [ 00:10:58.416 { 00:10:58.416 "name": "BaseBdev1", 00:10:58.416 "uuid": "884ceed1-3fad-4083-957e-27acfbf2cbb1", 00:10:58.416 "is_configured": true, 00:10:58.416 "data_offset": 2048, 00:10:58.416 "data_size": 63488 00:10:58.416 }, 00:10:58.416 { 00:10:58.416 "name": null, 00:10:58.416 "uuid": "b6f6f636-fed6-4e61-acf3-4ed636804f4e", 00:10:58.416 "is_configured": false, 00:10:58.416 "data_offset": 0, 00:10:58.416 "data_size": 63488 00:10:58.416 }, 00:10:58.416 { 00:10:58.416 "name": 
"BaseBdev3", 00:10:58.416 "uuid": "fd59a735-4022-4427-b996-2eec19ea0a10", 00:10:58.416 "is_configured": true, 00:10:58.416 "data_offset": 2048, 00:10:58.416 "data_size": 63488 00:10:58.416 }, 00:10:58.416 { 00:10:58.416 "name": "BaseBdev4", 00:10:58.416 "uuid": "7ac77dc7-4267-4417-99da-8c78c9e46b46", 00:10:58.416 "is_configured": true, 00:10:58.416 "data_offset": 2048, 00:10:58.416 "data_size": 63488 00:10:58.416 } 00:10:58.416 ] 00:10:58.416 }' 00:10:58.416 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.416 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.675 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.675 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:58.675 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.675 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.675 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.936 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:58.936 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:58.936 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.936 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.936 [2024-10-09 01:30:57.583465] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:58.936 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.936 01:30:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:58.936 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.936 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.936 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.936 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.936 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.936 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.936 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.936 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.936 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.936 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.936 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.936 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.936 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.936 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.936 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.936 "name": "Existed_Raid", 00:10:58.936 "uuid": "50d8d4e8-16db-4314-9381-8939f0862d59", 00:10:58.936 "strip_size_kb": 0, 00:10:58.936 "state": "configuring", 00:10:58.936 
"raid_level": "raid1", 00:10:58.936 "superblock": true, 00:10:58.936 "num_base_bdevs": 4, 00:10:58.936 "num_base_bdevs_discovered": 2, 00:10:58.936 "num_base_bdevs_operational": 4, 00:10:58.936 "base_bdevs_list": [ 00:10:58.936 { 00:10:58.936 "name": "BaseBdev1", 00:10:58.936 "uuid": "884ceed1-3fad-4083-957e-27acfbf2cbb1", 00:10:58.936 "is_configured": true, 00:10:58.936 "data_offset": 2048, 00:10:58.936 "data_size": 63488 00:10:58.936 }, 00:10:58.936 { 00:10:58.936 "name": null, 00:10:58.936 "uuid": "b6f6f636-fed6-4e61-acf3-4ed636804f4e", 00:10:58.936 "is_configured": false, 00:10:58.936 "data_offset": 0, 00:10:58.936 "data_size": 63488 00:10:58.936 }, 00:10:58.936 { 00:10:58.936 "name": null, 00:10:58.936 "uuid": "fd59a735-4022-4427-b996-2eec19ea0a10", 00:10:58.936 "is_configured": false, 00:10:58.937 "data_offset": 0, 00:10:58.937 "data_size": 63488 00:10:58.937 }, 00:10:58.937 { 00:10:58.937 "name": "BaseBdev4", 00:10:58.937 "uuid": "7ac77dc7-4267-4417-99da-8c78c9e46b46", 00:10:58.937 "is_configured": true, 00:10:58.937 "data_offset": 2048, 00:10:58.937 "data_size": 63488 00:10:58.937 } 00:10:58.937 ] 00:10:58.937 }' 00:10:58.937 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.937 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.197 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.197 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.197 01:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:59.197 01:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.197 01:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.197 01:30:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:59.197 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:59.197 01:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.197 01:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.197 [2024-10-09 01:30:58.035645] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:59.197 01:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.197 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:59.197 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.197 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.197 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.197 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.197 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.197 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.197 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.197 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.197 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.197 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.197 01:30:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.197 01:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.197 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.197 01:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.457 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.457 "name": "Existed_Raid", 00:10:59.457 "uuid": "50d8d4e8-16db-4314-9381-8939f0862d59", 00:10:59.457 "strip_size_kb": 0, 00:10:59.457 "state": "configuring", 00:10:59.457 "raid_level": "raid1", 00:10:59.457 "superblock": true, 00:10:59.457 "num_base_bdevs": 4, 00:10:59.457 "num_base_bdevs_discovered": 3, 00:10:59.457 "num_base_bdevs_operational": 4, 00:10:59.457 "base_bdevs_list": [ 00:10:59.457 { 00:10:59.457 "name": "BaseBdev1", 00:10:59.457 "uuid": "884ceed1-3fad-4083-957e-27acfbf2cbb1", 00:10:59.457 "is_configured": true, 00:10:59.457 "data_offset": 2048, 00:10:59.457 "data_size": 63488 00:10:59.457 }, 00:10:59.457 { 00:10:59.457 "name": null, 00:10:59.457 "uuid": "b6f6f636-fed6-4e61-acf3-4ed636804f4e", 00:10:59.457 "is_configured": false, 00:10:59.457 "data_offset": 0, 00:10:59.457 "data_size": 63488 00:10:59.457 }, 00:10:59.457 { 00:10:59.457 "name": "BaseBdev3", 00:10:59.457 "uuid": "fd59a735-4022-4427-b996-2eec19ea0a10", 00:10:59.457 "is_configured": true, 00:10:59.457 "data_offset": 2048, 00:10:59.457 "data_size": 63488 00:10:59.457 }, 00:10:59.457 { 00:10:59.457 "name": "BaseBdev4", 00:10:59.457 "uuid": "7ac77dc7-4267-4417-99da-8c78c9e46b46", 00:10:59.457 "is_configured": true, 00:10:59.457 "data_offset": 2048, 00:10:59.457 "data_size": 63488 00:10:59.457 } 00:10:59.457 ] 00:10:59.457 }' 00:10:59.457 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.457 
01:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.717 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.717 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:59.717 01:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.717 01:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.717 01:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.717 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:59.717 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:59.717 01:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.717 01:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.717 [2024-10-09 01:30:58.527800] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:59.717 01:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.717 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:59.717 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.717 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.717 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.717 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.717 01:30:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.717 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.717 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.717 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.717 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.717 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.717 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.717 01:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.717 01:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.718 01:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.718 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.718 "name": "Existed_Raid", 00:10:59.718 "uuid": "50d8d4e8-16db-4314-9381-8939f0862d59", 00:10:59.718 "strip_size_kb": 0, 00:10:59.718 "state": "configuring", 00:10:59.718 "raid_level": "raid1", 00:10:59.718 "superblock": true, 00:10:59.718 "num_base_bdevs": 4, 00:10:59.718 "num_base_bdevs_discovered": 2, 00:10:59.718 "num_base_bdevs_operational": 4, 00:10:59.718 "base_bdevs_list": [ 00:10:59.718 { 00:10:59.718 "name": null, 00:10:59.718 "uuid": "884ceed1-3fad-4083-957e-27acfbf2cbb1", 00:10:59.718 "is_configured": false, 00:10:59.718 "data_offset": 0, 00:10:59.718 "data_size": 63488 00:10:59.718 }, 00:10:59.718 { 00:10:59.718 "name": null, 00:10:59.718 "uuid": "b6f6f636-fed6-4e61-acf3-4ed636804f4e", 00:10:59.718 "is_configured": false, 
00:10:59.718 "data_offset": 0, 00:10:59.718 "data_size": 63488 00:10:59.718 }, 00:10:59.718 { 00:10:59.718 "name": "BaseBdev3", 00:10:59.718 "uuid": "fd59a735-4022-4427-b996-2eec19ea0a10", 00:10:59.718 "is_configured": true, 00:10:59.718 "data_offset": 2048, 00:10:59.718 "data_size": 63488 00:10:59.718 }, 00:10:59.718 { 00:10:59.718 "name": "BaseBdev4", 00:10:59.718 "uuid": "7ac77dc7-4267-4417-99da-8c78c9e46b46", 00:10:59.718 "is_configured": true, 00:10:59.718 "data_offset": 2048, 00:10:59.718 "data_size": 63488 00:10:59.718 } 00:10:59.718 ] 00:10:59.718 }' 00:10:59.718 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.718 01:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.289 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.289 01:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.289 01:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.289 01:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:00.289 01:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.289 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:00.289 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:00.289 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.289 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.289 [2024-10-09 01:30:59.019368] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:00.289 01:30:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.289 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:00.289 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.289 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.289 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.289 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.289 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.289 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.289 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.289 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.289 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.289 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.289 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.289 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.289 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.289 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.289 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.289 "name": 
"Existed_Raid", 00:11:00.289 "uuid": "50d8d4e8-16db-4314-9381-8939f0862d59", 00:11:00.289 "strip_size_kb": 0, 00:11:00.289 "state": "configuring", 00:11:00.289 "raid_level": "raid1", 00:11:00.289 "superblock": true, 00:11:00.289 "num_base_bdevs": 4, 00:11:00.289 "num_base_bdevs_discovered": 3, 00:11:00.289 "num_base_bdevs_operational": 4, 00:11:00.289 "base_bdevs_list": [ 00:11:00.289 { 00:11:00.289 "name": null, 00:11:00.289 "uuid": "884ceed1-3fad-4083-957e-27acfbf2cbb1", 00:11:00.289 "is_configured": false, 00:11:00.289 "data_offset": 0, 00:11:00.289 "data_size": 63488 00:11:00.289 }, 00:11:00.289 { 00:11:00.289 "name": "BaseBdev2", 00:11:00.289 "uuid": "b6f6f636-fed6-4e61-acf3-4ed636804f4e", 00:11:00.289 "is_configured": true, 00:11:00.289 "data_offset": 2048, 00:11:00.289 "data_size": 63488 00:11:00.289 }, 00:11:00.289 { 00:11:00.289 "name": "BaseBdev3", 00:11:00.289 "uuid": "fd59a735-4022-4427-b996-2eec19ea0a10", 00:11:00.289 "is_configured": true, 00:11:00.289 "data_offset": 2048, 00:11:00.289 "data_size": 63488 00:11:00.289 }, 00:11:00.289 { 00:11:00.289 "name": "BaseBdev4", 00:11:00.289 "uuid": "7ac77dc7-4267-4417-99da-8c78c9e46b46", 00:11:00.289 "is_configured": true, 00:11:00.289 "data_offset": 2048, 00:11:00.289 "data_size": 63488 00:11:00.289 } 00:11:00.289 ] 00:11:00.289 }' 00:11:00.289 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.289 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.859 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:00.859 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.859 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.859 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:00.859 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.859 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:00.859 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 884ceed1-3fad-4083-957e-27acfbf2cbb1 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.860 [2024-10-09 01:30:59.612153] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:00.860 [2024-10-09 01:30:59.612395] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:00.860 [2024-10-09 01:30:59.612410] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:00.860 NewBaseBdev 00:11:00.860 [2024-10-09 01:30:59.612742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:11:00.860 [2024-10-09 01:30:59.612894] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:00.860 [2024-10-09 01:30:59.612916] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:11:00.860 [2024-10-09 01:30:59.613026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.860 [ 00:11:00.860 { 00:11:00.860 "name": "NewBaseBdev", 00:11:00.860 "aliases": [ 00:11:00.860 "884ceed1-3fad-4083-957e-27acfbf2cbb1" 00:11:00.860 ], 00:11:00.860 "product_name": "Malloc disk", 00:11:00.860 "block_size": 512, 
00:11:00.860 "num_blocks": 65536, 00:11:00.860 "uuid": "884ceed1-3fad-4083-957e-27acfbf2cbb1", 00:11:00.860 "assigned_rate_limits": { 00:11:00.860 "rw_ios_per_sec": 0, 00:11:00.860 "rw_mbytes_per_sec": 0, 00:11:00.860 "r_mbytes_per_sec": 0, 00:11:00.860 "w_mbytes_per_sec": 0 00:11:00.860 }, 00:11:00.860 "claimed": true, 00:11:00.860 "claim_type": "exclusive_write", 00:11:00.860 "zoned": false, 00:11:00.860 "supported_io_types": { 00:11:00.860 "read": true, 00:11:00.860 "write": true, 00:11:00.860 "unmap": true, 00:11:00.860 "flush": true, 00:11:00.860 "reset": true, 00:11:00.860 "nvme_admin": false, 00:11:00.860 "nvme_io": false, 00:11:00.860 "nvme_io_md": false, 00:11:00.860 "write_zeroes": true, 00:11:00.860 "zcopy": true, 00:11:00.860 "get_zone_info": false, 00:11:00.860 "zone_management": false, 00:11:00.860 "zone_append": false, 00:11:00.860 "compare": false, 00:11:00.860 "compare_and_write": false, 00:11:00.860 "abort": true, 00:11:00.860 "seek_hole": false, 00:11:00.860 "seek_data": false, 00:11:00.860 "copy": true, 00:11:00.860 "nvme_iov_md": false 00:11:00.860 }, 00:11:00.860 "memory_domains": [ 00:11:00.860 { 00:11:00.860 "dma_device_id": "system", 00:11:00.860 "dma_device_type": 1 00:11:00.860 }, 00:11:00.860 { 00:11:00.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.860 "dma_device_type": 2 00:11:00.860 } 00:11:00.860 ], 00:11:00.860 "driver_specific": {} 00:11:00.860 } 00:11:00.860 ] 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.860 "name": "Existed_Raid", 00:11:00.860 "uuid": "50d8d4e8-16db-4314-9381-8939f0862d59", 00:11:00.860 "strip_size_kb": 0, 00:11:00.860 "state": "online", 00:11:00.860 "raid_level": "raid1", 00:11:00.860 "superblock": true, 00:11:00.860 "num_base_bdevs": 4, 00:11:00.860 "num_base_bdevs_discovered": 4, 00:11:00.860 "num_base_bdevs_operational": 4, 00:11:00.860 "base_bdevs_list": [ 00:11:00.860 { 00:11:00.860 "name": "NewBaseBdev", 00:11:00.860 "uuid": 
"884ceed1-3fad-4083-957e-27acfbf2cbb1", 00:11:00.860 "is_configured": true, 00:11:00.860 "data_offset": 2048, 00:11:00.860 "data_size": 63488 00:11:00.860 }, 00:11:00.860 { 00:11:00.860 "name": "BaseBdev2", 00:11:00.860 "uuid": "b6f6f636-fed6-4e61-acf3-4ed636804f4e", 00:11:00.860 "is_configured": true, 00:11:00.860 "data_offset": 2048, 00:11:00.860 "data_size": 63488 00:11:00.860 }, 00:11:00.860 { 00:11:00.860 "name": "BaseBdev3", 00:11:00.860 "uuid": "fd59a735-4022-4427-b996-2eec19ea0a10", 00:11:00.860 "is_configured": true, 00:11:00.860 "data_offset": 2048, 00:11:00.860 "data_size": 63488 00:11:00.860 }, 00:11:00.860 { 00:11:00.860 "name": "BaseBdev4", 00:11:00.860 "uuid": "7ac77dc7-4267-4417-99da-8c78c9e46b46", 00:11:00.860 "is_configured": true, 00:11:00.860 "data_offset": 2048, 00:11:00.860 "data_size": 63488 00:11:00.860 } 00:11:00.860 ] 00:11:00.860 }' 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.860 01:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.431 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:01.431 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:01.431 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:01.431 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:01.431 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:01.431 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:01.431 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:01.431 01:31:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.431 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.431 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:01.431 [2024-10-09 01:31:00.080684] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.431 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.431 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:01.431 "name": "Existed_Raid", 00:11:01.431 "aliases": [ 00:11:01.431 "50d8d4e8-16db-4314-9381-8939f0862d59" 00:11:01.431 ], 00:11:01.431 "product_name": "Raid Volume", 00:11:01.431 "block_size": 512, 00:11:01.431 "num_blocks": 63488, 00:11:01.431 "uuid": "50d8d4e8-16db-4314-9381-8939f0862d59", 00:11:01.431 "assigned_rate_limits": { 00:11:01.431 "rw_ios_per_sec": 0, 00:11:01.431 "rw_mbytes_per_sec": 0, 00:11:01.431 "r_mbytes_per_sec": 0, 00:11:01.431 "w_mbytes_per_sec": 0 00:11:01.431 }, 00:11:01.431 "claimed": false, 00:11:01.431 "zoned": false, 00:11:01.431 "supported_io_types": { 00:11:01.431 "read": true, 00:11:01.431 "write": true, 00:11:01.431 "unmap": false, 00:11:01.431 "flush": false, 00:11:01.431 "reset": true, 00:11:01.431 "nvme_admin": false, 00:11:01.431 "nvme_io": false, 00:11:01.431 "nvme_io_md": false, 00:11:01.431 "write_zeroes": true, 00:11:01.431 "zcopy": false, 00:11:01.431 "get_zone_info": false, 00:11:01.431 "zone_management": false, 00:11:01.431 "zone_append": false, 00:11:01.431 "compare": false, 00:11:01.431 "compare_and_write": false, 00:11:01.431 "abort": false, 00:11:01.431 "seek_hole": false, 00:11:01.431 "seek_data": false, 00:11:01.431 "copy": false, 00:11:01.431 "nvme_iov_md": false 00:11:01.431 }, 00:11:01.431 "memory_domains": [ 00:11:01.431 { 00:11:01.431 "dma_device_id": "system", 00:11:01.431 "dma_device_type": 1 00:11:01.431 }, 00:11:01.431 
{ 00:11:01.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.431 "dma_device_type": 2 00:11:01.431 }, 00:11:01.431 { 00:11:01.431 "dma_device_id": "system", 00:11:01.431 "dma_device_type": 1 00:11:01.431 }, 00:11:01.431 { 00:11:01.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.431 "dma_device_type": 2 00:11:01.431 }, 00:11:01.431 { 00:11:01.431 "dma_device_id": "system", 00:11:01.431 "dma_device_type": 1 00:11:01.431 }, 00:11:01.431 { 00:11:01.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.431 "dma_device_type": 2 00:11:01.431 }, 00:11:01.431 { 00:11:01.431 "dma_device_id": "system", 00:11:01.431 "dma_device_type": 1 00:11:01.431 }, 00:11:01.431 { 00:11:01.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.431 "dma_device_type": 2 00:11:01.431 } 00:11:01.431 ], 00:11:01.431 "driver_specific": { 00:11:01.431 "raid": { 00:11:01.431 "uuid": "50d8d4e8-16db-4314-9381-8939f0862d59", 00:11:01.431 "strip_size_kb": 0, 00:11:01.431 "state": "online", 00:11:01.431 "raid_level": "raid1", 00:11:01.431 "superblock": true, 00:11:01.431 "num_base_bdevs": 4, 00:11:01.431 "num_base_bdevs_discovered": 4, 00:11:01.431 "num_base_bdevs_operational": 4, 00:11:01.431 "base_bdevs_list": [ 00:11:01.431 { 00:11:01.431 "name": "NewBaseBdev", 00:11:01.431 "uuid": "884ceed1-3fad-4083-957e-27acfbf2cbb1", 00:11:01.431 "is_configured": true, 00:11:01.431 "data_offset": 2048, 00:11:01.431 "data_size": 63488 00:11:01.431 }, 00:11:01.431 { 00:11:01.431 "name": "BaseBdev2", 00:11:01.432 "uuid": "b6f6f636-fed6-4e61-acf3-4ed636804f4e", 00:11:01.432 "is_configured": true, 00:11:01.432 "data_offset": 2048, 00:11:01.432 "data_size": 63488 00:11:01.432 }, 00:11:01.432 { 00:11:01.432 "name": "BaseBdev3", 00:11:01.432 "uuid": "fd59a735-4022-4427-b996-2eec19ea0a10", 00:11:01.432 "is_configured": true, 00:11:01.432 "data_offset": 2048, 00:11:01.432 "data_size": 63488 00:11:01.432 }, 00:11:01.432 { 00:11:01.432 "name": "BaseBdev4", 00:11:01.432 "uuid": 
"7ac77dc7-4267-4417-99da-8c78c9e46b46", 00:11:01.432 "is_configured": true, 00:11:01.432 "data_offset": 2048, 00:11:01.432 "data_size": 63488 00:11:01.432 } 00:11:01.432 ] 00:11:01.432 } 00:11:01.432 } 00:11:01.432 }' 00:11:01.432 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:01.432 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:01.432 BaseBdev2 00:11:01.432 BaseBdev3 00:11:01.432 BaseBdev4' 00:11:01.432 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.432 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:01.432 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.432 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:01.432 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.432 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.432 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.432 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.432 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.432 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.432 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.432 01:31:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:01.432 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.432 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.432 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.432 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.432 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.432 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.432 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.432 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:01.432 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.432 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.432 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.692 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.693 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.693 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.693 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.693 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:01.693 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.693 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.693 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.693 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.693 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.693 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.693 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:01.693 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.693 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.693 [2024-10-09 01:31:00.404405] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:01.693 [2024-10-09 01:31:00.404437] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:01.693 [2024-10-09 01:31:00.404529] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:01.693 [2024-10-09 01:31:00.404834] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:01.693 [2024-10-09 01:31:00.404852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:01.693 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.693 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 85765 
00:11:01.693 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 85765 ']' 00:11:01.693 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 85765 00:11:01.693 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:01.693 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:01.693 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85765 00:11:01.693 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:01.693 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:01.693 killing process with pid 85765 00:11:01.693 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85765' 00:11:01.693 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 85765 00:11:01.693 [2024-10-09 01:31:00.450162] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:01.693 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 85765 00:11:01.693 [2024-10-09 01:31:00.523388] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:02.263 01:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:02.263 00:11:02.263 real 0m9.995s 00:11:02.263 user 0m16.816s 00:11:02.263 sys 0m2.105s 00:11:02.263 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:02.263 01:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.263 ************************************ 00:11:02.263 END TEST raid_state_function_test_sb 00:11:02.263 ************************************ 
00:11:02.263 01:31:00 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:02.263 01:31:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:02.263 01:31:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:02.264 01:31:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:02.264 ************************************ 00:11:02.264 START TEST raid_superblock_test 00:11:02.264 ************************************ 00:11:02.264 01:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:11:02.264 01:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:02.264 01:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:02.264 01:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:02.264 01:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:02.264 01:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:02.264 01:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:02.264 01:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:02.264 01:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:02.264 01:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:02.264 01:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:02.264 01:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:02.264 01:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:02.264 01:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local 
raid_bdev 00:11:02.264 01:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:02.264 01:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:02.264 01:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=86424 00:11:02.264 01:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:02.264 01:31:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 86424 00:11:02.264 01:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 86424 ']' 00:11:02.264 01:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.264 01:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:02.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.264 01:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.264 01:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:02.264 01:31:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.264 [2024-10-09 01:31:01.053640] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:11:02.264 [2024-10-09 01:31:01.053785] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86424 ] 00:11:02.523 [2024-10-09 01:31:01.186458] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:11:02.523 [2024-10-09 01:31:01.215894] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.523 [2024-10-09 01:31:01.284135] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.523 [2024-10-09 01:31:01.359471] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.523 [2024-10-09 01:31:01.359526] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.091 malloc1 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.091 [2024-10-09 01:31:01.902355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:03.091 [2024-10-09 01:31:01.902432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.091 [2024-10-09 01:31:01.902458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:03.091 [2024-10-09 01:31:01.902471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.091 [2024-10-09 01:31:01.904875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.091 [2024-10-09 01:31:01.904911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:03.091 pt1 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.091 malloc2 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.091 [2024-10-09 01:31:01.947623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:03.091 [2024-10-09 01:31:01.947689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.091 [2024-10-09 01:31:01.947715] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:03.091 [2024-10-09 01:31:01.947728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.091 [2024-10-09 01:31:01.950822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.091 [2024-10-09 01:31:01.950855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:03.091 pt2 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.091 malloc3 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.091 01:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.091 [2024-10-09 01:31:01.982230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:03.091 [2024-10-09 01:31:01.982295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.091 [2024-10-09 01:31:01.982318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:03.091 [2024-10-09 01:31:01.982327] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.350 [2024-10-09 01:31:01.984789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.350 [2024-10-09 01:31:01.984822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:03.350 pt3 00:11:03.350 01:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.350 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:03.350 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:03.350 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:03.350 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:03.350 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:03.350 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:03.350 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:03.350 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:03.350 01:31:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:03.350 01:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.350 01:31:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.350 malloc4 00:11:03.350 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.350 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:03.351 01:31:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.351 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.351 [2024-10-09 01:31:02.017293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:03.351 [2024-10-09 01:31:02.017341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.351 [2024-10-09 01:31:02.017364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:03.351 [2024-10-09 01:31:02.017373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.351 [2024-10-09 01:31:02.019756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.351 [2024-10-09 01:31:02.019786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:03.351 pt4 00:11:03.351 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.351 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:03.351 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:03.351 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:03.351 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.351 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.351 [2024-10-09 01:31:02.029328] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:03.351 [2024-10-09 01:31:02.031454] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:03.351 [2024-10-09 01:31:02.031538] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:03.351 [2024-10-09 01:31:02.031581] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:03.351 [2024-10-09 01:31:02.031739] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:03.351 [2024-10-09 01:31:02.031755] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:03.351 [2024-10-09 01:31:02.032047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:03.351 [2024-10-09 01:31:02.032215] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:03.351 [2024-10-09 01:31:02.032236] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:03.351 [2024-10-09 01:31:02.032361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.351 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.351 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:03.351 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.351 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.351 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.351 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.351 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.351 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.351 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.351 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.351 01:31:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.351 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.351 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.351 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.351 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.351 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.351 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.351 "name": "raid_bdev1", 00:11:03.351 "uuid": "befc63f3-b867-486c-b7b2-dd53b92cb538", 00:11:03.351 "strip_size_kb": 0, 00:11:03.351 "state": "online", 00:11:03.351 "raid_level": "raid1", 00:11:03.351 "superblock": true, 00:11:03.351 "num_base_bdevs": 4, 00:11:03.351 "num_base_bdevs_discovered": 4, 00:11:03.351 "num_base_bdevs_operational": 4, 00:11:03.351 "base_bdevs_list": [ 00:11:03.351 { 00:11:03.351 "name": "pt1", 00:11:03.351 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:03.351 "is_configured": true, 00:11:03.351 "data_offset": 2048, 00:11:03.351 "data_size": 63488 00:11:03.351 }, 00:11:03.351 { 00:11:03.351 "name": "pt2", 00:11:03.351 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:03.351 "is_configured": true, 00:11:03.351 "data_offset": 2048, 00:11:03.351 "data_size": 63488 00:11:03.351 }, 00:11:03.351 { 00:11:03.351 "name": "pt3", 00:11:03.351 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:03.351 "is_configured": true, 00:11:03.351 "data_offset": 2048, 00:11:03.351 "data_size": 63488 00:11:03.351 }, 00:11:03.351 { 00:11:03.351 "name": "pt4", 00:11:03.351 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:03.351 "is_configured": true, 00:11:03.351 "data_offset": 2048, 00:11:03.351 "data_size": 63488 00:11:03.351 } 
00:11:03.351 ] 00:11:03.351 }' 00:11:03.351 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.351 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.921 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:03.921 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:03.921 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:03.921 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:03.921 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:03.921 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:03.921 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:03.921 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:03.921 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.921 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.921 [2024-10-09 01:31:02.517784] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.921 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.921 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:03.921 "name": "raid_bdev1", 00:11:03.921 "aliases": [ 00:11:03.921 "befc63f3-b867-486c-b7b2-dd53b92cb538" 00:11:03.921 ], 00:11:03.921 "product_name": "Raid Volume", 00:11:03.921 "block_size": 512, 00:11:03.921 "num_blocks": 63488, 00:11:03.921 "uuid": "befc63f3-b867-486c-b7b2-dd53b92cb538", 00:11:03.921 "assigned_rate_limits": { 00:11:03.921 "rw_ios_per_sec": 0, 
00:11:03.921 "rw_mbytes_per_sec": 0, 00:11:03.921 "r_mbytes_per_sec": 0, 00:11:03.921 "w_mbytes_per_sec": 0 00:11:03.921 }, 00:11:03.921 "claimed": false, 00:11:03.921 "zoned": false, 00:11:03.921 "supported_io_types": { 00:11:03.921 "read": true, 00:11:03.921 "write": true, 00:11:03.921 "unmap": false, 00:11:03.921 "flush": false, 00:11:03.921 "reset": true, 00:11:03.921 "nvme_admin": false, 00:11:03.921 "nvme_io": false, 00:11:03.921 "nvme_io_md": false, 00:11:03.921 "write_zeroes": true, 00:11:03.921 "zcopy": false, 00:11:03.921 "get_zone_info": false, 00:11:03.921 "zone_management": false, 00:11:03.921 "zone_append": false, 00:11:03.921 "compare": false, 00:11:03.921 "compare_and_write": false, 00:11:03.921 "abort": false, 00:11:03.921 "seek_hole": false, 00:11:03.921 "seek_data": false, 00:11:03.921 "copy": false, 00:11:03.921 "nvme_iov_md": false 00:11:03.921 }, 00:11:03.921 "memory_domains": [ 00:11:03.921 { 00:11:03.921 "dma_device_id": "system", 00:11:03.921 "dma_device_type": 1 00:11:03.921 }, 00:11:03.921 { 00:11:03.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.921 "dma_device_type": 2 00:11:03.921 }, 00:11:03.921 { 00:11:03.921 "dma_device_id": "system", 00:11:03.921 "dma_device_type": 1 00:11:03.921 }, 00:11:03.921 { 00:11:03.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.921 "dma_device_type": 2 00:11:03.921 }, 00:11:03.921 { 00:11:03.921 "dma_device_id": "system", 00:11:03.921 "dma_device_type": 1 00:11:03.921 }, 00:11:03.921 { 00:11:03.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.921 "dma_device_type": 2 00:11:03.921 }, 00:11:03.921 { 00:11:03.921 "dma_device_id": "system", 00:11:03.921 "dma_device_type": 1 00:11:03.921 }, 00:11:03.921 { 00:11:03.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.921 "dma_device_type": 2 00:11:03.921 } 00:11:03.921 ], 00:11:03.921 "driver_specific": { 00:11:03.921 "raid": { 00:11:03.922 "uuid": "befc63f3-b867-486c-b7b2-dd53b92cb538", 00:11:03.922 "strip_size_kb": 0, 00:11:03.922 
"state": "online", 00:11:03.922 "raid_level": "raid1", 00:11:03.922 "superblock": true, 00:11:03.922 "num_base_bdevs": 4, 00:11:03.922 "num_base_bdevs_discovered": 4, 00:11:03.922 "num_base_bdevs_operational": 4, 00:11:03.922 "base_bdevs_list": [ 00:11:03.922 { 00:11:03.922 "name": "pt1", 00:11:03.922 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:03.922 "is_configured": true, 00:11:03.922 "data_offset": 2048, 00:11:03.922 "data_size": 63488 00:11:03.922 }, 00:11:03.922 { 00:11:03.922 "name": "pt2", 00:11:03.922 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:03.922 "is_configured": true, 00:11:03.922 "data_offset": 2048, 00:11:03.922 "data_size": 63488 00:11:03.922 }, 00:11:03.922 { 00:11:03.922 "name": "pt3", 00:11:03.922 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:03.922 "is_configured": true, 00:11:03.922 "data_offset": 2048, 00:11:03.922 "data_size": 63488 00:11:03.922 }, 00:11:03.922 { 00:11:03.922 "name": "pt4", 00:11:03.922 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:03.922 "is_configured": true, 00:11:03.922 "data_offset": 2048, 00:11:03.922 "data_size": 63488 00:11:03.922 } 00:11:03.922 ] 00:11:03.922 } 00:11:03.922 } 00:11:03.922 }' 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:03.922 pt2 00:11:03.922 pt3 00:11:03.922 pt4' 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
pt1 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.922 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:03.922 [2024-10-09 01:31:02.809780] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:04.182 01:31:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=befc63f3-b867-486c-b7b2-dd53b92cb538 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z befc63f3-b867-486c-b7b2-dd53b92cb538 ']' 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.182 [2024-10-09 01:31:02.857514] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:04.182 [2024-10-09 01:31:02.857554] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:04.182 [2024-10-09 01:31:02.857628] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:04.182 [2024-10-09 01:31:02.857717] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:04.182 [2024-10-09 01:31:02.857730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.182 01:31:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt4 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.182 01:31:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.182 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.182 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:04.182 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:04.182 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:04.182 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:04.182 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:04.182 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:04.182 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:04.182 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:04.182 01:31:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:04.182 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.182 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.182 [2024-10-09 01:31:03.017582] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:04.182 [2024-10-09 01:31:03.019729] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:04.182 [2024-10-09 01:31:03.019778] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:04.182 [2024-10-09 01:31:03.019809] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:04.182 [2024-10-09 01:31:03.019855] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:04.182 [2024-10-09 01:31:03.019907] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:04.182 [2024-10-09 01:31:03.019925] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:04.182 [2024-10-09 01:31:03.019942] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:04.182 [2024-10-09 01:31:03.019955] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:04.182 [2024-10-09 01:31:03.019972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:04.182 request: 00:11:04.182 { 00:11:04.182 "name": "raid_bdev1", 00:11:04.182 "raid_level": "raid1", 00:11:04.182 "base_bdevs": [ 00:11:04.182 "malloc1", 00:11:04.182 "malloc2", 00:11:04.182 "malloc3", 00:11:04.182 
"malloc4" 00:11:04.182 ], 00:11:04.182 "superblock": false, 00:11:04.182 "method": "bdev_raid_create", 00:11:04.182 "req_id": 1 00:11:04.182 } 00:11:04.182 Got JSON-RPC error response 00:11:04.182 response: 00:11:04.182 { 00:11:04.182 "code": -17, 00:11:04.182 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:04.182 } 00:11:04.182 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:04.182 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:04.182 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:04.182 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:04.182 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:04.182 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.182 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:04.183 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.183 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.183 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.183 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:04.183 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:04.442 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:04.442 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.442 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.442 [2024-10-09 01:31:03.081576] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:04.442 [2024-10-09 01:31:03.081626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.442 [2024-10-09 01:31:03.081642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:04.442 [2024-10-09 01:31:03.081653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.442 [2024-10-09 01:31:03.084046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.442 [2024-10-09 01:31:03.084082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:04.442 [2024-10-09 01:31:03.084164] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:04.442 [2024-10-09 01:31:03.084212] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:04.442 pt1 00:11:04.442 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.442 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:04.442 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.442 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.442 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.442 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.442 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.442 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.442 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.442 01:31:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.442 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.442 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.442 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.442 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.442 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.442 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.442 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.442 "name": "raid_bdev1", 00:11:04.442 "uuid": "befc63f3-b867-486c-b7b2-dd53b92cb538", 00:11:04.442 "strip_size_kb": 0, 00:11:04.442 "state": "configuring", 00:11:04.442 "raid_level": "raid1", 00:11:04.442 "superblock": true, 00:11:04.442 "num_base_bdevs": 4, 00:11:04.442 "num_base_bdevs_discovered": 1, 00:11:04.442 "num_base_bdevs_operational": 4, 00:11:04.442 "base_bdevs_list": [ 00:11:04.442 { 00:11:04.442 "name": "pt1", 00:11:04.442 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:04.442 "is_configured": true, 00:11:04.442 "data_offset": 2048, 00:11:04.442 "data_size": 63488 00:11:04.442 }, 00:11:04.442 { 00:11:04.442 "name": null, 00:11:04.442 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.442 "is_configured": false, 00:11:04.442 "data_offset": 2048, 00:11:04.442 "data_size": 63488 00:11:04.442 }, 00:11:04.442 { 00:11:04.442 "name": null, 00:11:04.442 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:04.442 "is_configured": false, 00:11:04.442 "data_offset": 2048, 00:11:04.442 "data_size": 63488 00:11:04.442 }, 00:11:04.442 { 00:11:04.442 "name": null, 00:11:04.442 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:04.442 "is_configured": 
false, 00:11:04.442 "data_offset": 2048, 00:11:04.442 "data_size": 63488 00:11:04.442 } 00:11:04.442 ] 00:11:04.442 }' 00:11:04.442 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.442 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.740 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:04.740 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:04.740 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.740 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.740 [2024-10-09 01:31:03.513669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:04.740 [2024-10-09 01:31:03.513722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.740 [2024-10-09 01:31:03.513739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:04.740 [2024-10-09 01:31:03.513750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.740 [2024-10-09 01:31:03.514154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.740 [2024-10-09 01:31:03.514182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:04.740 [2024-10-09 01:31:03.514249] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:04.740 [2024-10-09 01:31:03.514282] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:04.740 pt2 00:11:04.740 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.740 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 
00:11:04.740 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.740 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.740 [2024-10-09 01:31:03.525704] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:04.740 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.740 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:04.740 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.740 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.740 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.740 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.740 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.740 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.740 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.740 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.740 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.740 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.740 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.740 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.740 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.740 01:31:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.740 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.740 "name": "raid_bdev1", 00:11:04.740 "uuid": "befc63f3-b867-486c-b7b2-dd53b92cb538", 00:11:04.740 "strip_size_kb": 0, 00:11:04.740 "state": "configuring", 00:11:04.740 "raid_level": "raid1", 00:11:04.740 "superblock": true, 00:11:04.740 "num_base_bdevs": 4, 00:11:04.740 "num_base_bdevs_discovered": 1, 00:11:04.740 "num_base_bdevs_operational": 4, 00:11:04.740 "base_bdevs_list": [ 00:11:04.740 { 00:11:04.740 "name": "pt1", 00:11:04.740 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:04.740 "is_configured": true, 00:11:04.740 "data_offset": 2048, 00:11:04.740 "data_size": 63488 00:11:04.740 }, 00:11:04.740 { 00:11:04.740 "name": null, 00:11:04.740 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.740 "is_configured": false, 00:11:04.740 "data_offset": 0, 00:11:04.740 "data_size": 63488 00:11:04.740 }, 00:11:04.740 { 00:11:04.740 "name": null, 00:11:04.740 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:04.740 "is_configured": false, 00:11:04.740 "data_offset": 2048, 00:11:04.740 "data_size": 63488 00:11:04.740 }, 00:11:04.740 { 00:11:04.740 "name": null, 00:11:04.740 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:04.740 "is_configured": false, 00:11:04.740 "data_offset": 2048, 00:11:04.740 "data_size": 63488 00:11:04.740 } 00:11:04.740 ] 00:11:04.740 }' 00:11:04.740 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.740 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.310 [2024-10-09 01:31:03.905830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:05.310 [2024-10-09 01:31:03.905888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.310 [2024-10-09 01:31:03.905908] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:05.310 [2024-10-09 01:31:03.905917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.310 [2024-10-09 01:31:03.906342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.310 [2024-10-09 01:31:03.906366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:05.310 [2024-10-09 01:31:03.906441] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:05.310 [2024-10-09 01:31:03.906463] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:05.310 pt2 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.310 [2024-10-09 01:31:03.913804] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:05.310 [2024-10-09 01:31:03.913858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.310 [2024-10-09 01:31:03.913878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:05.310 [2024-10-09 01:31:03.913886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.310 [2024-10-09 01:31:03.914238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.310 [2024-10-09 01:31:03.914260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:05.310 [2024-10-09 01:31:03.914316] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:05.310 [2024-10-09 01:31:03.914334] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:05.310 pt3 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.310 [2024-10-09 01:31:03.921783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:05.310 [2024-10-09 01:31:03.921825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.310 [2024-10-09 01:31:03.921841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 
00:11:05.310 [2024-10-09 01:31:03.921849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.310 [2024-10-09 01:31:03.922174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.310 [2024-10-09 01:31:03.922197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:05.310 [2024-10-09 01:31:03.922252] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:05.310 [2024-10-09 01:31:03.922269] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:05.310 [2024-10-09 01:31:03.922377] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:05.310 [2024-10-09 01:31:03.922393] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:05.310 [2024-10-09 01:31:03.922682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:05.310 [2024-10-09 01:31:03.922815] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:05.310 [2024-10-09 01:31:03.922835] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:05.310 [2024-10-09 01:31:03.922937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.310 pt4 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.310 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.310 "name": "raid_bdev1", 00:11:05.310 "uuid": "befc63f3-b867-486c-b7b2-dd53b92cb538", 00:11:05.310 "strip_size_kb": 0, 00:11:05.311 "state": "online", 00:11:05.311 "raid_level": "raid1", 00:11:05.311 "superblock": true, 00:11:05.311 "num_base_bdevs": 4, 00:11:05.311 "num_base_bdevs_discovered": 4, 00:11:05.311 "num_base_bdevs_operational": 4, 00:11:05.311 "base_bdevs_list": [ 00:11:05.311 { 00:11:05.311 "name": "pt1", 00:11:05.311 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:05.311 "is_configured": true, 00:11:05.311 
"data_offset": 2048, 00:11:05.311 "data_size": 63488 00:11:05.311 }, 00:11:05.311 { 00:11:05.311 "name": "pt2", 00:11:05.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.311 "is_configured": true, 00:11:05.311 "data_offset": 2048, 00:11:05.311 "data_size": 63488 00:11:05.311 }, 00:11:05.311 { 00:11:05.311 "name": "pt3", 00:11:05.311 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:05.311 "is_configured": true, 00:11:05.311 "data_offset": 2048, 00:11:05.311 "data_size": 63488 00:11:05.311 }, 00:11:05.311 { 00:11:05.311 "name": "pt4", 00:11:05.311 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:05.311 "is_configured": true, 00:11:05.311 "data_offset": 2048, 00:11:05.311 "data_size": 63488 00:11:05.311 } 00:11:05.311 ] 00:11:05.311 }' 00:11:05.311 01:31:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.311 01:31:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.571 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:05.571 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:05.571 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:05.571 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:05.571 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:05.571 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:05.571 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:05.571 01:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.571 01:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.571 01:31:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:05.571 [2024-10-09 01:31:04.306262] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.571 01:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.571 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:05.571 "name": "raid_bdev1", 00:11:05.571 "aliases": [ 00:11:05.571 "befc63f3-b867-486c-b7b2-dd53b92cb538" 00:11:05.571 ], 00:11:05.571 "product_name": "Raid Volume", 00:11:05.571 "block_size": 512, 00:11:05.571 "num_blocks": 63488, 00:11:05.571 "uuid": "befc63f3-b867-486c-b7b2-dd53b92cb538", 00:11:05.571 "assigned_rate_limits": { 00:11:05.571 "rw_ios_per_sec": 0, 00:11:05.571 "rw_mbytes_per_sec": 0, 00:11:05.571 "r_mbytes_per_sec": 0, 00:11:05.571 "w_mbytes_per_sec": 0 00:11:05.571 }, 00:11:05.571 "claimed": false, 00:11:05.571 "zoned": false, 00:11:05.571 "supported_io_types": { 00:11:05.571 "read": true, 00:11:05.571 "write": true, 00:11:05.571 "unmap": false, 00:11:05.571 "flush": false, 00:11:05.571 "reset": true, 00:11:05.571 "nvme_admin": false, 00:11:05.571 "nvme_io": false, 00:11:05.571 "nvme_io_md": false, 00:11:05.571 "write_zeroes": true, 00:11:05.571 "zcopy": false, 00:11:05.571 "get_zone_info": false, 00:11:05.571 "zone_management": false, 00:11:05.571 "zone_append": false, 00:11:05.571 "compare": false, 00:11:05.571 "compare_and_write": false, 00:11:05.571 "abort": false, 00:11:05.571 "seek_hole": false, 00:11:05.571 "seek_data": false, 00:11:05.571 "copy": false, 00:11:05.571 "nvme_iov_md": false 00:11:05.571 }, 00:11:05.571 "memory_domains": [ 00:11:05.571 { 00:11:05.571 "dma_device_id": "system", 00:11:05.571 "dma_device_type": 1 00:11:05.571 }, 00:11:05.571 { 00:11:05.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.571 "dma_device_type": 2 00:11:05.571 }, 00:11:05.571 { 00:11:05.571 "dma_device_id": "system", 00:11:05.571 "dma_device_type": 1 00:11:05.571 }, 00:11:05.571 { 
00:11:05.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.571 "dma_device_type": 2 00:11:05.571 }, 00:11:05.571 { 00:11:05.571 "dma_device_id": "system", 00:11:05.571 "dma_device_type": 1 00:11:05.571 }, 00:11:05.571 { 00:11:05.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.571 "dma_device_type": 2 00:11:05.571 }, 00:11:05.571 { 00:11:05.571 "dma_device_id": "system", 00:11:05.571 "dma_device_type": 1 00:11:05.571 }, 00:11:05.571 { 00:11:05.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.571 "dma_device_type": 2 00:11:05.571 } 00:11:05.571 ], 00:11:05.571 "driver_specific": { 00:11:05.571 "raid": { 00:11:05.571 "uuid": "befc63f3-b867-486c-b7b2-dd53b92cb538", 00:11:05.571 "strip_size_kb": 0, 00:11:05.571 "state": "online", 00:11:05.571 "raid_level": "raid1", 00:11:05.571 "superblock": true, 00:11:05.571 "num_base_bdevs": 4, 00:11:05.571 "num_base_bdevs_discovered": 4, 00:11:05.571 "num_base_bdevs_operational": 4, 00:11:05.571 "base_bdevs_list": [ 00:11:05.571 { 00:11:05.571 "name": "pt1", 00:11:05.571 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:05.571 "is_configured": true, 00:11:05.571 "data_offset": 2048, 00:11:05.572 "data_size": 63488 00:11:05.572 }, 00:11:05.572 { 00:11:05.572 "name": "pt2", 00:11:05.572 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.572 "is_configured": true, 00:11:05.572 "data_offset": 2048, 00:11:05.572 "data_size": 63488 00:11:05.572 }, 00:11:05.572 { 00:11:05.572 "name": "pt3", 00:11:05.572 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:05.572 "is_configured": true, 00:11:05.572 "data_offset": 2048, 00:11:05.572 "data_size": 63488 00:11:05.572 }, 00:11:05.572 { 00:11:05.572 "name": "pt4", 00:11:05.572 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:05.572 "is_configured": true, 00:11:05.572 "data_offset": 2048, 00:11:05.572 "data_size": 63488 00:11:05.572 } 00:11:05.572 ] 00:11:05.572 } 00:11:05.572 } 00:11:05.572 }' 00:11:05.572 01:31:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:05.572 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:05.572 pt2 00:11:05.572 pt3 00:11:05.572 pt4' 00:11:05.572 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.572 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:05.572 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.572 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:05.572 01:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.572 01:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.572 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.572 01:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:05.832 [2024-10-09 01:31:04.618260] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' befc63f3-b867-486c-b7b2-dd53b92cb538 '!=' befc63f3-b867-486c-b7b2-dd53b92cb538 ']' 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.832 [2024-10-09 01:31:04.666077] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.832 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.832 "name": "raid_bdev1", 00:11:05.832 "uuid": "befc63f3-b867-486c-b7b2-dd53b92cb538", 00:11:05.832 "strip_size_kb": 0, 00:11:05.832 "state": "online", 00:11:05.832 "raid_level": "raid1", 00:11:05.832 "superblock": true, 00:11:05.832 "num_base_bdevs": 4, 00:11:05.832 "num_base_bdevs_discovered": 3, 00:11:05.832 "num_base_bdevs_operational": 3, 00:11:05.832 "base_bdevs_list": [ 00:11:05.832 { 00:11:05.832 "name": null, 
00:11:05.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.832 "is_configured": false, 00:11:05.832 "data_offset": 0, 00:11:05.832 "data_size": 63488 00:11:05.832 }, 00:11:05.832 { 00:11:05.832 "name": "pt2", 00:11:05.832 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.832 "is_configured": true, 00:11:05.832 "data_offset": 2048, 00:11:05.832 "data_size": 63488 00:11:05.832 }, 00:11:05.832 { 00:11:05.832 "name": "pt3", 00:11:05.832 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:05.832 "is_configured": true, 00:11:05.833 "data_offset": 2048, 00:11:05.833 "data_size": 63488 00:11:05.833 }, 00:11:05.833 { 00:11:05.833 "name": "pt4", 00:11:05.833 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:05.833 "is_configured": true, 00:11:05.833 "data_offset": 2048, 00:11:05.833 "data_size": 63488 00:11:05.833 } 00:11:05.833 ] 00:11:05.833 }' 00:11:05.833 01:31:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.833 01:31:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.402 [2024-10-09 01:31:05.086152] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:06.402 [2024-10-09 01:31:05.086183] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.402 [2024-10-09 01:31:05.086281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.402 [2024-10-09 01:31:05.086355] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.402 [2024-10-09 01:31:05.086364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.402 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.403 [2024-10-09 01:31:05.146154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:06.403 [2024-10-09 01:31:05.146210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.403 [2024-10-09 01:31:05.146231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:06.403 [2024-10-09 01:31:05.146240] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.403 [2024-10-09 01:31:05.148717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.403 [2024-10-09 01:31:05.148752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:06.403 [2024-10-09 01:31:05.148825] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:06.403 [2024-10-09 01:31:05.148858] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:06.403 pt2 00:11:06.403 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.403 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:06.403 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.403 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.403 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.403 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.403 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:06.403 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.403 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.403 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.403 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.403 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.403 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:11:06.403 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.403 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.403 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.403 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.403 "name": "raid_bdev1", 00:11:06.403 "uuid": "befc63f3-b867-486c-b7b2-dd53b92cb538", 00:11:06.403 "strip_size_kb": 0, 00:11:06.403 "state": "configuring", 00:11:06.403 "raid_level": "raid1", 00:11:06.403 "superblock": true, 00:11:06.403 "num_base_bdevs": 4, 00:11:06.403 "num_base_bdevs_discovered": 1, 00:11:06.403 "num_base_bdevs_operational": 3, 00:11:06.403 "base_bdevs_list": [ 00:11:06.403 { 00:11:06.403 "name": null, 00:11:06.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.403 "is_configured": false, 00:11:06.403 "data_offset": 2048, 00:11:06.403 "data_size": 63488 00:11:06.403 }, 00:11:06.403 { 00:11:06.403 "name": "pt2", 00:11:06.403 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.403 "is_configured": true, 00:11:06.403 "data_offset": 2048, 00:11:06.403 "data_size": 63488 00:11:06.403 }, 00:11:06.403 { 00:11:06.403 "name": null, 00:11:06.403 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:06.403 "is_configured": false, 00:11:06.403 "data_offset": 2048, 00:11:06.403 "data_size": 63488 00:11:06.403 }, 00:11:06.403 { 00:11:06.403 "name": null, 00:11:06.403 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:06.403 "is_configured": false, 00:11:06.403 "data_offset": 2048, 00:11:06.403 "data_size": 63488 00:11:06.403 } 00:11:06.403 ] 00:11:06.403 }' 00:11:06.403 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.403 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.663 01:31:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:06.663 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:06.663 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:06.663 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.663 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.663 [2024-10-09 01:31:05.522304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:06.663 [2024-10-09 01:31:05.522362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.663 [2024-10-09 01:31:05.522403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:06.663 [2024-10-09 01:31:05.522412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.663 [2024-10-09 01:31:05.522872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.663 [2024-10-09 01:31:05.522897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:06.663 [2024-10-09 01:31:05.522981] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:06.663 [2024-10-09 01:31:05.523007] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:06.663 pt3 00:11:06.663 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.663 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:06.663 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.663 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.663 
01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.663 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.663 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:06.663 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.663 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.663 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.663 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.663 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.663 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.663 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.663 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.923 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.923 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.923 "name": "raid_bdev1", 00:11:06.923 "uuid": "befc63f3-b867-486c-b7b2-dd53b92cb538", 00:11:06.923 "strip_size_kb": 0, 00:11:06.923 "state": "configuring", 00:11:06.923 "raid_level": "raid1", 00:11:06.923 "superblock": true, 00:11:06.923 "num_base_bdevs": 4, 00:11:06.923 "num_base_bdevs_discovered": 2, 00:11:06.923 "num_base_bdevs_operational": 3, 00:11:06.923 "base_bdevs_list": [ 00:11:06.923 { 00:11:06.923 "name": null, 00:11:06.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.923 "is_configured": false, 00:11:06.923 "data_offset": 2048, 00:11:06.923 "data_size": 63488 00:11:06.923 }, 
00:11:06.923 { 00:11:06.923 "name": "pt2", 00:11:06.923 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.923 "is_configured": true, 00:11:06.923 "data_offset": 2048, 00:11:06.923 "data_size": 63488 00:11:06.923 }, 00:11:06.923 { 00:11:06.923 "name": "pt3", 00:11:06.923 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:06.923 "is_configured": true, 00:11:06.923 "data_offset": 2048, 00:11:06.923 "data_size": 63488 00:11:06.923 }, 00:11:06.923 { 00:11:06.923 "name": null, 00:11:06.923 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:06.923 "is_configured": false, 00:11:06.923 "data_offset": 2048, 00:11:06.923 "data_size": 63488 00:11:06.923 } 00:11:06.923 ] 00:11:06.923 }' 00:11:06.923 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.923 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.188 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:07.188 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:07.188 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:07.188 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:07.188 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.188 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.188 [2024-10-09 01:31:05.982402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:07.188 [2024-10-09 01:31:05.982465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.188 [2024-10-09 01:31:05.982489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:11:07.188 [2024-10-09 01:31:05.982498] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.188 [2024-10-09 01:31:05.982917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.188 [2024-10-09 01:31:05.982943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:07.188 [2024-10-09 01:31:05.983019] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:07.188 [2024-10-09 01:31:05.983050] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:07.188 [2024-10-09 01:31:05.983165] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:07.188 [2024-10-09 01:31:05.983181] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:07.188 [2024-10-09 01:31:05.983443] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:11:07.188 [2024-10-09 01:31:05.983622] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:07.188 [2024-10-09 01:31:05.983643] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:07.188 [2024-10-09 01:31:05.983757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.188 pt4 00:11:07.188 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.188 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:07.188 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.188 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.188 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.188 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:07.188 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:07.189 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.189 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.189 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.189 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.189 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.189 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.189 01:31:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.189 01:31:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.189 01:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.189 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.189 "name": "raid_bdev1", 00:11:07.189 "uuid": "befc63f3-b867-486c-b7b2-dd53b92cb538", 00:11:07.189 "strip_size_kb": 0, 00:11:07.189 "state": "online", 00:11:07.189 "raid_level": "raid1", 00:11:07.189 "superblock": true, 00:11:07.189 "num_base_bdevs": 4, 00:11:07.189 "num_base_bdevs_discovered": 3, 00:11:07.189 "num_base_bdevs_operational": 3, 00:11:07.189 "base_bdevs_list": [ 00:11:07.189 { 00:11:07.189 "name": null, 00:11:07.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.189 "is_configured": false, 00:11:07.189 "data_offset": 2048, 00:11:07.189 "data_size": 63488 00:11:07.189 }, 00:11:07.189 { 00:11:07.189 "name": "pt2", 00:11:07.189 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:07.189 "is_configured": true, 00:11:07.189 "data_offset": 2048, 00:11:07.189 
"data_size": 63488 00:11:07.189 }, 00:11:07.189 { 00:11:07.189 "name": "pt3", 00:11:07.189 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:07.189 "is_configured": true, 00:11:07.189 "data_offset": 2048, 00:11:07.189 "data_size": 63488 00:11:07.189 }, 00:11:07.189 { 00:11:07.189 "name": "pt4", 00:11:07.189 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:07.189 "is_configured": true, 00:11:07.189 "data_offset": 2048, 00:11:07.189 "data_size": 63488 00:11:07.189 } 00:11:07.189 ] 00:11:07.189 }' 00:11:07.189 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.189 01:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.757 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:07.757 01:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.757 01:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.757 [2024-10-09 01:31:06.398498] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:07.757 [2024-10-09 01:31:06.398549] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.757 [2024-10-09 01:31:06.398654] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.757 [2024-10-09 01:31:06.398735] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:07.757 [2024-10-09 01:31:06.398748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:07.757 01:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.757 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:07.757 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:07.757 01:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.757 01:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.757 01:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.757 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:07.757 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:07.757 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:07.757 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:07.757 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:07.757 01:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.757 01:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.757 01:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.757 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:07.757 01:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.757 01:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.757 [2024-10-09 01:31:06.454507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:07.757 [2024-10-09 01:31:06.454590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.757 [2024-10-09 01:31:06.454611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:11:07.757 [2024-10-09 01:31:06.454622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.757 
[2024-10-09 01:31:06.456839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.758 [2024-10-09 01:31:06.456876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:07.758 [2024-10-09 01:31:06.456944] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:07.758 [2024-10-09 01:31:06.456983] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:07.758 [2024-10-09 01:31:06.457077] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:07.758 [2024-10-09 01:31:06.457093] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:07.758 [2024-10-09 01:31:06.457115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:07.758 [2024-10-09 01:31:06.457157] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:07.758 [2024-10-09 01:31:06.457254] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:07.758 pt1 00:11:07.758 01:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.758 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:07.758 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:07.758 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.758 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.758 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.758 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.758 01:31:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:07.758 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.758 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.758 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.758 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.758 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.758 01:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.758 01:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.758 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.758 01:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.758 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.758 "name": "raid_bdev1", 00:11:07.758 "uuid": "befc63f3-b867-486c-b7b2-dd53b92cb538", 00:11:07.758 "strip_size_kb": 0, 00:11:07.758 "state": "configuring", 00:11:07.758 "raid_level": "raid1", 00:11:07.758 "superblock": true, 00:11:07.758 "num_base_bdevs": 4, 00:11:07.758 "num_base_bdevs_discovered": 2, 00:11:07.758 "num_base_bdevs_operational": 3, 00:11:07.758 "base_bdevs_list": [ 00:11:07.758 { 00:11:07.758 "name": null, 00:11:07.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.758 "is_configured": false, 00:11:07.758 "data_offset": 2048, 00:11:07.758 "data_size": 63488 00:11:07.758 }, 00:11:07.758 { 00:11:07.758 "name": "pt2", 00:11:07.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:07.758 "is_configured": true, 00:11:07.758 "data_offset": 2048, 00:11:07.758 "data_size": 63488 00:11:07.758 }, 
00:11:07.758 { 00:11:07.758 "name": "pt3", 00:11:07.758 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:07.758 "is_configured": true, 00:11:07.758 "data_offset": 2048, 00:11:07.758 "data_size": 63488 00:11:07.758 }, 00:11:07.758 { 00:11:07.758 "name": null, 00:11:07.758 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:07.758 "is_configured": false, 00:11:07.758 "data_offset": 2048, 00:11:07.758 "data_size": 63488 00:11:07.758 } 00:11:07.758 ] 00:11:07.758 }' 00:11:07.758 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.758 01:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.327 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:08.327 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:08.327 01:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.327 01:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.327 01:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.327 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:08.327 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:08.327 01:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.327 01:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.327 [2024-10-09 01:31:06.990697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:08.327 [2024-10-09 01:31:06.990751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.327 [2024-10-09 01:31:06.990772] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:08.327 [2024-10-09 01:31:06.990782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.327 [2024-10-09 01:31:06.991189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.327 [2024-10-09 01:31:06.991212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:08.327 [2024-10-09 01:31:06.991285] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:08.327 [2024-10-09 01:31:06.991306] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:08.327 [2024-10-09 01:31:06.991414] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:08.327 [2024-10-09 01:31:06.991430] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:08.327 [2024-10-09 01:31:06.991696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:08.327 [2024-10-09 01:31:06.991823] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:08.327 [2024-10-09 01:31:06.991842] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:08.327 [2024-10-09 01:31:06.991954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.327 pt4 00:11:08.327 01:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.327 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:08.327 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.327 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.327 01:31:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.327 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.327 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:08.327 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.327 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.327 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.327 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.327 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.327 01:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.327 01:31:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.327 01:31:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.327 01:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.327 01:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.327 "name": "raid_bdev1", 00:11:08.327 "uuid": "befc63f3-b867-486c-b7b2-dd53b92cb538", 00:11:08.327 "strip_size_kb": 0, 00:11:08.327 "state": "online", 00:11:08.327 "raid_level": "raid1", 00:11:08.327 "superblock": true, 00:11:08.327 "num_base_bdevs": 4, 00:11:08.327 "num_base_bdevs_discovered": 3, 00:11:08.327 "num_base_bdevs_operational": 3, 00:11:08.327 "base_bdevs_list": [ 00:11:08.327 { 00:11:08.327 "name": null, 00:11:08.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.327 "is_configured": false, 00:11:08.327 "data_offset": 2048, 00:11:08.327 "data_size": 63488 00:11:08.327 }, 00:11:08.327 { 
00:11:08.327 "name": "pt2", 00:11:08.327 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:08.327 "is_configured": true, 00:11:08.327 "data_offset": 2048, 00:11:08.327 "data_size": 63488 00:11:08.327 }, 00:11:08.327 { 00:11:08.327 "name": "pt3", 00:11:08.327 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:08.327 "is_configured": true, 00:11:08.327 "data_offset": 2048, 00:11:08.327 "data_size": 63488 00:11:08.327 }, 00:11:08.327 { 00:11:08.327 "name": "pt4", 00:11:08.327 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:08.327 "is_configured": true, 00:11:08.327 "data_offset": 2048, 00:11:08.327 "data_size": 63488 00:11:08.327 } 00:11:08.327 ] 00:11:08.327 }' 00:11:08.327 01:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.327 01:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.587 01:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:08.587 01:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:08.587 01:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.587 01:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.587 01:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.847 01:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:08.847 01:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:08.847 01:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.847 01:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.847 01:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:08.847 [2024-10-09 
01:31:07.499117] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:08.847 01:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.847 01:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' befc63f3-b867-486c-b7b2-dd53b92cb538 '!=' befc63f3-b867-486c-b7b2-dd53b92cb538 ']' 00:11:08.847 01:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 86424 00:11:08.847 01:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 86424 ']' 00:11:08.847 01:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 86424 00:11:08.847 01:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:08.847 01:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:08.847 01:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86424 00:11:08.847 killing process with pid 86424 00:11:08.847 01:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:08.847 01:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:08.847 01:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86424' 00:11:08.847 01:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 86424 00:11:08.847 [2024-10-09 01:31:07.570938] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:08.847 01:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 86424 00:11:08.847 [2024-10-09 01:31:07.571029] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:08.847 [2024-10-09 01:31:07.571110] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:11:08.847 [2024-10-09 01:31:07.571127] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:08.847 [2024-10-09 01:31:07.647679] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:09.418 01:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:09.418 00:11:09.418 real 0m7.052s 00:11:09.418 user 0m11.581s 00:11:09.418 sys 0m1.572s 00:11:09.418 01:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:09.418 01:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.418 ************************************ 00:11:09.418 END TEST raid_superblock_test 00:11:09.418 ************************************ 00:11:09.418 01:31:08 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:09.418 01:31:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:09.418 01:31:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:09.418 01:31:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:09.418 ************************************ 00:11:09.418 START TEST raid_read_error_test 00:11:09.418 ************************************ 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.418 01:31:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.FoGyPMPSO8 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=86896 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 86896 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 86896 ']' 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:09.418 01:31:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.418 [2024-10-09 01:31:08.197886] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:11:09.418 [2024-10-09 01:31:08.198030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86896 ] 00:11:09.678 [2024-10-09 01:31:08.334644] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:09.678 [2024-10-09 01:31:08.363848] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.678 [2024-10-09 01:31:08.432183] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.678 [2024-10-09 01:31:08.508388] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.678 [2024-10-09 01:31:08.508445] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.248 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:10.248 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:10.248 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:10.248 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:10.248 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.248 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.248 BaseBdev1_malloc 00:11:10.248 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.248 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:10.248 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.248 01:31:09 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.248 true 00:11:10.248 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.248 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:10.248 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.248 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.248 [2024-10-09 01:31:09.039962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:10.248 [2024-10-09 01:31:09.040104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.248 [2024-10-09 01:31:09.040127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:10.248 [2024-10-09 01:31:09.040151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.248 [2024-10-09 01:31:09.042713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.248 [2024-10-09 01:31:09.042756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:10.248 BaseBdev1 00:11:10.248 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.248 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:10.248 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:10.248 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.248 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.248 BaseBdev2_malloc 00:11:10.248 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:11:10.248 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:10.248 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.248 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.248 true 00:11:10.248 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.248 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:10.248 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.248 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.248 [2024-10-09 01:31:09.085022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:10.248 [2024-10-09 01:31:09.085083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.248 [2024-10-09 01:31:09.085103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:10.249 [2024-10-09 01:31:09.085117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.249 [2024-10-09 01:31:09.087604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.249 [2024-10-09 01:31:09.087643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:10.249 BaseBdev2 00:11:10.249 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.249 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:10.249 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:10.249 01:31:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.249 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.249 BaseBdev3_malloc 00:11:10.249 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.249 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:10.249 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.249 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.249 true 00:11:10.249 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.249 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:10.249 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.249 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.249 [2024-10-09 01:31:09.123587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:10.249 [2024-10-09 01:31:09.123637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.249 [2024-10-09 01:31:09.123653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:10.249 [2024-10-09 01:31:09.123664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.249 [2024-10-09 01:31:09.126043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.249 [2024-10-09 01:31:09.126133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:10.249 BaseBdev3 00:11:10.249 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.249 01:31:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:10.249 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:10.249 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.249 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.508 BaseBdev4_malloc 00:11:10.508 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.508 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:10.508 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.508 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.508 true 00:11:10.508 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.508 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:10.508 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.508 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.508 [2024-10-09 01:31:09.158230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:10.508 [2024-10-09 01:31:09.158282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.508 [2024-10-09 01:31:09.158299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:10.508 [2024-10-09 01:31:09.158311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.508 [2024-10-09 01:31:09.160688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.508 
[2024-10-09 01:31:09.160783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:10.508 BaseBdev4 00:11:10.508 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.508 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:10.508 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.508 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.508 [2024-10-09 01:31:09.166295] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:10.508 [2024-10-09 01:31:09.168446] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:10.508 [2024-10-09 01:31:09.168620] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:10.508 [2024-10-09 01:31:09.168689] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:10.508 [2024-10-09 01:31:09.168895] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:10.508 [2024-10-09 01:31:09.168911] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:10.508 [2024-10-09 01:31:09.169180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:10.509 [2024-10-09 01:31:09.169318] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:10.509 [2024-10-09 01:31:09.169334] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:10.509 [2024-10-09 01:31:09.169467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.509 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:11:10.509 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:10.509 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.509 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.509 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.509 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.509 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.509 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.509 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.509 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.509 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.509 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.509 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.509 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.509 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.509 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.509 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.509 "name": "raid_bdev1", 00:11:10.509 "uuid": "9bed5e6f-14f2-4111-8cf2-047ea3159eb5", 00:11:10.509 "strip_size_kb": 0, 00:11:10.509 "state": "online", 00:11:10.509 "raid_level": "raid1", 00:11:10.509 "superblock": true, 
00:11:10.509 "num_base_bdevs": 4, 00:11:10.509 "num_base_bdevs_discovered": 4, 00:11:10.509 "num_base_bdevs_operational": 4, 00:11:10.509 "base_bdevs_list": [ 00:11:10.509 { 00:11:10.509 "name": "BaseBdev1", 00:11:10.509 "uuid": "28fc77b6-4a9e-5161-a7cd-d04f4addc27d", 00:11:10.509 "is_configured": true, 00:11:10.509 "data_offset": 2048, 00:11:10.509 "data_size": 63488 00:11:10.509 }, 00:11:10.509 { 00:11:10.509 "name": "BaseBdev2", 00:11:10.509 "uuid": "f54d61a2-3d04-5cb7-951f-807a696ce4ed", 00:11:10.509 "is_configured": true, 00:11:10.509 "data_offset": 2048, 00:11:10.509 "data_size": 63488 00:11:10.509 }, 00:11:10.509 { 00:11:10.509 "name": "BaseBdev3", 00:11:10.509 "uuid": "30a13442-98d7-553b-8d03-0cee8fec234d", 00:11:10.509 "is_configured": true, 00:11:10.509 "data_offset": 2048, 00:11:10.509 "data_size": 63488 00:11:10.509 }, 00:11:10.509 { 00:11:10.509 "name": "BaseBdev4", 00:11:10.509 "uuid": "227ce686-e63a-5a14-ac4c-c6279c8dca66", 00:11:10.509 "is_configured": true, 00:11:10.509 "data_offset": 2048, 00:11:10.509 "data_size": 63488 00:11:10.509 } 00:11:10.509 ] 00:11:10.509 }' 00:11:10.509 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.509 01:31:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.769 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:10.769 01:31:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:10.769 [2024-10-09 01:31:09.642882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:11:11.708 01:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:11.708 01:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.708 01:31:10 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:11.708 01:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.708 01:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:11.708 01:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:11.708 01:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:11.708 01:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:11.708 01:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:11.708 01:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:11.708 01:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.708 01:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.708 01:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.708 01:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.708 01:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.708 01:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.708 01:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.708 01:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.708 01:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.708 01:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.708 01:31:10 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.708 01:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.708 01:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.968 01:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.968 "name": "raid_bdev1", 00:11:11.968 "uuid": "9bed5e6f-14f2-4111-8cf2-047ea3159eb5", 00:11:11.968 "strip_size_kb": 0, 00:11:11.968 "state": "online", 00:11:11.968 "raid_level": "raid1", 00:11:11.968 "superblock": true, 00:11:11.968 "num_base_bdevs": 4, 00:11:11.968 "num_base_bdevs_discovered": 4, 00:11:11.968 "num_base_bdevs_operational": 4, 00:11:11.968 "base_bdevs_list": [ 00:11:11.968 { 00:11:11.968 "name": "BaseBdev1", 00:11:11.968 "uuid": "28fc77b6-4a9e-5161-a7cd-d04f4addc27d", 00:11:11.968 "is_configured": true, 00:11:11.968 "data_offset": 2048, 00:11:11.968 "data_size": 63488 00:11:11.968 }, 00:11:11.968 { 00:11:11.968 "name": "BaseBdev2", 00:11:11.968 "uuid": "f54d61a2-3d04-5cb7-951f-807a696ce4ed", 00:11:11.968 "is_configured": true, 00:11:11.968 "data_offset": 2048, 00:11:11.968 "data_size": 63488 00:11:11.968 }, 00:11:11.968 { 00:11:11.968 "name": "BaseBdev3", 00:11:11.968 "uuid": "30a13442-98d7-553b-8d03-0cee8fec234d", 00:11:11.968 "is_configured": true, 00:11:11.968 "data_offset": 2048, 00:11:11.968 "data_size": 63488 00:11:11.968 }, 00:11:11.968 { 00:11:11.968 "name": "BaseBdev4", 00:11:11.968 "uuid": "227ce686-e63a-5a14-ac4c-c6279c8dca66", 00:11:11.968 "is_configured": true, 00:11:11.968 "data_offset": 2048, 00:11:11.968 "data_size": 63488 00:11:11.968 } 00:11:11.968 ] 00:11:11.968 }' 00:11:11.968 01:31:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.968 01:31:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.230 01:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 
00:11:12.230 01:31:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.230 01:31:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.230 [2024-10-09 01:31:11.058339] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:12.230 [2024-10-09 01:31:11.058457] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:12.230 [2024-10-09 01:31:11.061125] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:12.230 [2024-10-09 01:31:11.061185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.230 [2024-10-09 01:31:11.061318] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:12.230 [2024-10-09 01:31:11.061333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:12.230 { 00:11:12.230 "results": [ 00:11:12.230 { 00:11:12.230 "job": "raid_bdev1", 00:11:12.230 "core_mask": "0x1", 00:11:12.230 "workload": "randrw", 00:11:12.230 "percentage": 50, 00:11:12.230 "status": "finished", 00:11:12.230 "queue_depth": 1, 00:11:12.230 "io_size": 131072, 00:11:12.230 "runtime": 1.413511, 00:11:12.230 "iops": 8775.311971396048, 00:11:12.230 "mibps": 1096.913996424506, 00:11:12.230 "io_failed": 0, 00:11:12.230 "io_timeout": 0, 00:11:12.230 "avg_latency_us": 111.52481747522533, 00:11:12.230 "min_latency_us": 21.86699206833435, 00:11:12.230 "max_latency_us": 1363.7862808332607 00:11:12.230 } 00:11:12.230 ], 00:11:12.230 "core_count": 1 00:11:12.230 } 00:11:12.230 01:31:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.230 01:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 86896 00:11:12.230 01:31:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 86896 ']' 
00:11:12.230 01:31:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 86896 00:11:12.230 01:31:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:12.230 01:31:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:12.230 01:31:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86896 00:11:12.230 01:31:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:12.230 01:31:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:12.230 01:31:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86896' 00:11:12.230 killing process with pid 86896 00:11:12.230 01:31:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 86896 00:11:12.230 [2024-10-09 01:31:11.102852] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:12.230 01:31:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 86896 00:11:12.489 [2024-10-09 01:31:11.168750] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:12.749 01:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.FoGyPMPSO8 00:11:12.749 01:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:12.749 01:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:12.749 01:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:12.749 01:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:12.749 01:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:12.749 01:31:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:12.749 01:31:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:12.749 00:11:12.749 real 0m3.463s 00:11:12.749 user 0m4.180s 00:11:12.749 sys 0m0.667s 00:11:12.749 01:31:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:12.749 ************************************ 00:11:12.749 END TEST raid_read_error_test 00:11:12.750 ************************************ 00:11:12.750 01:31:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.750 01:31:11 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:11:12.750 01:31:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:12.750 01:31:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:12.750 01:31:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:12.750 ************************************ 00:11:12.750 START TEST raid_write_error_test 00:11:12.750 ************************************ 00:11:12.750 01:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:11:12.750 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:12.750 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:12.750 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:12.750 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:12.750 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.750 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:12.750 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:12.750 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs 
)) 00:11:12.750 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:12.750 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:12.750 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.750 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:12.750 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:12.750 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.750 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:12.750 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:12.750 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.750 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:12.750 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:12.750 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:12.750 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:12.750 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:12.750 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:12.750 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:12.750 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:12.750 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:13.010 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 
00:11:13.010 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.U4zwJsk2fg 00:11:13.010 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=87025 00:11:13.010 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:13.010 01:31:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 87025 00:11:13.010 01:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 87025 ']' 00:11:13.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.010 01:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.010 01:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:13.010 01:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.010 01:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:13.010 01:31:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.010 [2024-10-09 01:31:11.741072] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:11:13.010 [2024-10-09 01:31:11.741212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87025 ] 00:11:13.010 [2024-10-09 01:31:11.878138] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:13.269 [2024-10-09 01:31:11.908311] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.269 [2024-10-09 01:31:11.977877] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.269 [2024-10-09 01:31:12.055202] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.269 [2024-10-09 01:31:12.055249] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.839 BaseBdev1_malloc 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.839 true 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.839 01:31:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.839 [2024-10-09 01:31:12.598601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:13.839 [2024-10-09 01:31:12.598665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.839 [2024-10-09 01:31:12.598689] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:13.839 [2024-10-09 01:31:12.598716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.839 [2024-10-09 01:31:12.601078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.839 [2024-10-09 01:31:12.601128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:13.839 BaseBdev1 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.839 BaseBdev2_malloc 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.839 true 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.839 [2024-10-09 01:31:12.667111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:13.839 [2024-10-09 01:31:12.667190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.839 [2024-10-09 01:31:12.667217] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:13.839 [2024-10-09 01:31:12.667235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.839 [2024-10-09 01:31:12.670766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.839 [2024-10-09 01:31:12.670808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:13.839 BaseBdev2 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.839 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.839 BaseBdev3_malloc 00:11:13.840 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.840 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:13.840 01:31:12 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.840 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.840 true 00:11:13.840 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.840 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:13.840 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.840 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.840 [2024-10-09 01:31:12.713532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:13.840 [2024-10-09 01:31:12.713651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.840 [2024-10-09 01:31:12.713673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:13.840 [2024-10-09 01:31:12.713685] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.840 [2024-10-09 01:31:12.716033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.840 [2024-10-09 01:31:12.716069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:13.840 BaseBdev3 00:11:13.840 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.840 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:13.840 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:13.840 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.840 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.099 BaseBdev4_malloc 00:11:14.099 
01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.099 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:14.099 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.099 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.099 true 00:11:14.099 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.099 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:14.099 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.099 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.099 [2024-10-09 01:31:12.760040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:14.099 [2024-10-09 01:31:12.760140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.099 [2024-10-09 01:31:12.760160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:14.099 [2024-10-09 01:31:12.760171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.099 [2024-10-09 01:31:12.762500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.099 [2024-10-09 01:31:12.762552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:14.100 BaseBdev4 00:11:14.100 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.100 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:14.100 01:31:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.100 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.100 [2024-10-09 01:31:12.772111] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.100 [2024-10-09 01:31:12.774191] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:14.100 [2024-10-09 01:31:12.774267] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:14.100 [2024-10-09 01:31:12.774333] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:14.100 [2024-10-09 01:31:12.774537] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:14.100 [2024-10-09 01:31:12.774551] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:14.100 [2024-10-09 01:31:12.774814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:14.100 [2024-10-09 01:31:12.774975] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:14.100 [2024-10-09 01:31:12.774986] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:14.100 [2024-10-09 01:31:12.775119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.100 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.100 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:14.100 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.100 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.100 01:31:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.100 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.100 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.100 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.100 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.100 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.100 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.100 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.100 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.100 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.100 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.100 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.100 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.100 "name": "raid_bdev1", 00:11:14.100 "uuid": "c2547f2d-03a4-4116-aa6d-3e2731c78c32", 00:11:14.100 "strip_size_kb": 0, 00:11:14.100 "state": "online", 00:11:14.100 "raid_level": "raid1", 00:11:14.100 "superblock": true, 00:11:14.100 "num_base_bdevs": 4, 00:11:14.100 "num_base_bdevs_discovered": 4, 00:11:14.100 "num_base_bdevs_operational": 4, 00:11:14.100 "base_bdevs_list": [ 00:11:14.100 { 00:11:14.100 "name": "BaseBdev1", 00:11:14.100 "uuid": "de2f7537-b2d9-522c-9d71-a2cf630e203c", 00:11:14.100 "is_configured": true, 00:11:14.100 "data_offset": 2048, 00:11:14.100 "data_size": 63488 00:11:14.100 }, 00:11:14.100 { 00:11:14.100 
"name": "BaseBdev2", 00:11:14.100 "uuid": "7dd92eb2-1103-5346-94d4-c0a54c7fb9ec", 00:11:14.100 "is_configured": true, 00:11:14.100 "data_offset": 2048, 00:11:14.100 "data_size": 63488 00:11:14.100 }, 00:11:14.100 { 00:11:14.100 "name": "BaseBdev3", 00:11:14.100 "uuid": "c061e3db-1f6a-5b66-aa8e-be6ee6dc7de7", 00:11:14.100 "is_configured": true, 00:11:14.100 "data_offset": 2048, 00:11:14.100 "data_size": 63488 00:11:14.100 }, 00:11:14.100 { 00:11:14.100 "name": "BaseBdev4", 00:11:14.100 "uuid": "5cc3cf5d-7d0e-5e53-a232-93389d266462", 00:11:14.100 "is_configured": true, 00:11:14.100 "data_offset": 2048, 00:11:14.100 "data_size": 63488 00:11:14.100 } 00:11:14.100 ] 00:11:14.100 }' 00:11:14.100 01:31:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.100 01:31:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.360 01:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:14.360 01:31:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:14.620 [2024-10-09 01:31:13.252767] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:11:15.559 01:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:15.559 01:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.559 01:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.559 [2024-10-09 01:31:14.191590] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:15.559 [2024-10-09 01:31:14.191746] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:15.559 [2024-10-09 01:31:14.192038] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 
raid_ch: 0x60d000006150 00:11:15.559 01:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.559 01:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:15.559 01:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:15.559 01:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:15.559 01:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:15.559 01:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:15.559 01:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.559 01:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.559 01:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.559 01:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.559 01:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:15.559 01:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.559 01:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.559 01:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.559 01:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.559 01:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.559 01:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.559 01:31:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.559 01:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.559 01:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.559 01:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.559 "name": "raid_bdev1", 00:11:15.559 "uuid": "c2547f2d-03a4-4116-aa6d-3e2731c78c32", 00:11:15.559 "strip_size_kb": 0, 00:11:15.559 "state": "online", 00:11:15.559 "raid_level": "raid1", 00:11:15.559 "superblock": true, 00:11:15.559 "num_base_bdevs": 4, 00:11:15.559 "num_base_bdevs_discovered": 3, 00:11:15.559 "num_base_bdevs_operational": 3, 00:11:15.559 "base_bdevs_list": [ 00:11:15.559 { 00:11:15.559 "name": null, 00:11:15.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.559 "is_configured": false, 00:11:15.559 "data_offset": 0, 00:11:15.559 "data_size": 63488 00:11:15.559 }, 00:11:15.559 { 00:11:15.559 "name": "BaseBdev2", 00:11:15.559 "uuid": "7dd92eb2-1103-5346-94d4-c0a54c7fb9ec", 00:11:15.559 "is_configured": true, 00:11:15.559 "data_offset": 2048, 00:11:15.559 "data_size": 63488 00:11:15.559 }, 00:11:15.559 { 00:11:15.559 "name": "BaseBdev3", 00:11:15.559 "uuid": "c061e3db-1f6a-5b66-aa8e-be6ee6dc7de7", 00:11:15.559 "is_configured": true, 00:11:15.559 "data_offset": 2048, 00:11:15.559 "data_size": 63488 00:11:15.559 }, 00:11:15.559 { 00:11:15.559 "name": "BaseBdev4", 00:11:15.559 "uuid": "5cc3cf5d-7d0e-5e53-a232-93389d266462", 00:11:15.559 "is_configured": true, 00:11:15.559 "data_offset": 2048, 00:11:15.559 "data_size": 63488 00:11:15.559 } 00:11:15.559 ] 00:11:15.559 }' 00:11:15.559 01:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.559 01:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.819 01:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 
00:11:15.819 01:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.819 01:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.819 [2024-10-09 01:31:14.625913] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:15.819 [2024-10-09 01:31:14.626037] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:15.819 [2024-10-09 01:31:14.628674] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:15.819 [2024-10-09 01:31:14.628725] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.819 [2024-10-09 01:31:14.628837] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:15.819 [2024-10-09 01:31:14.628848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:15.820 { 00:11:15.820 "results": [ 00:11:15.820 { 00:11:15.820 "job": "raid_bdev1", 00:11:15.820 "core_mask": "0x1", 00:11:15.820 "workload": "randrw", 00:11:15.820 "percentage": 50, 00:11:15.820 "status": "finished", 00:11:15.820 "queue_depth": 1, 00:11:15.820 "io_size": 131072, 00:11:15.820 "runtime": 1.37097, 00:11:15.820 "iops": 9485.984376025734, 00:11:15.820 "mibps": 1185.7480470032167, 00:11:15.820 "io_failed": 0, 00:11:15.820 "io_timeout": 0, 00:11:15.820 "avg_latency_us": 102.9269849178351, 00:11:15.820 "min_latency_us": 21.64385949620849, 00:11:15.820 "max_latency_us": 1506.5911269938115 00:11:15.820 } 00:11:15.820 ], 00:11:15.820 "core_count": 1 00:11:15.820 } 00:11:15.820 01:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.820 01:31:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 87025 00:11:15.820 01:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 87025 ']' 
00:11:15.820 01:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 87025 00:11:15.820 01:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:11:15.820 01:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:15.820 01:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87025 00:11:15.820 01:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:15.820 01:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:15.820 01:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87025' 00:11:15.820 killing process with pid 87025 00:11:15.820 01:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 87025 00:11:15.820 [2024-10-09 01:31:14.675700] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:15.820 01:31:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 87025 00:11:16.080 [2024-10-09 01:31:14.739331] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:16.339 01:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.U4zwJsk2fg 00:11:16.339 01:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:16.339 01:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:16.339 01:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:16.339 01:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:16.339 01:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:16.339 01:31:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:16.339 01:31:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:16.339 00:11:16.339 real 0m3.493s 00:11:16.339 user 0m4.159s 00:11:16.339 sys 0m0.665s 00:11:16.339 01:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:16.339 01:31:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.339 ************************************ 00:11:16.339 END TEST raid_write_error_test 00:11:16.339 ************************************ 00:11:16.339 01:31:15 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:11:16.339 01:31:15 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:16.339 01:31:15 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:11:16.339 01:31:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:16.339 01:31:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:16.339 01:31:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:16.339 ************************************ 00:11:16.339 START TEST raid_rebuild_test 00:11:16.339 ************************************ 00:11:16.339 01:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:11:16.339 01:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:16.339 01:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:16.339 01:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:16.339 01:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:16.339 01:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:16.339 01:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:16.339 01:31:15 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:16.339 01:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:16.340 01:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:16.340 01:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:16.340 01:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:16.340 01:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:16.340 01:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:16.340 01:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:16.340 01:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:16.340 01:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:16.340 01:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:16.340 01:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:16.340 01:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:16.340 01:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:16.340 01:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:16.340 01:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:16.340 01:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:16.340 01:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=87157 00:11:16.340 01:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:16.340 01:31:15 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@598 -- # waitforlisten 87157 00:11:16.340 01:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 87157 ']' 00:11:16.340 01:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.340 01:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:16.340 01:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.340 01:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:16.340 01:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.599 [2024-10-09 01:31:15.298259] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:11:16.600 [2024-10-09 01:31:15.298430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87157 ] 00:11:16.600 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:16.600 Zero copy mechanism will not be used. 00:11:16.600 [2024-10-09 01:31:15.435480] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:16.600 [2024-10-09 01:31:15.463294] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.859 [2024-10-09 01:31:15.534241] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.859 [2024-10-09 01:31:15.609662] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.859 [2024-10-09 01:31:15.609711] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.445 BaseBdev1_malloc 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.445 [2024-10-09 01:31:16.148902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:17.445 [2024-10-09 01:31:16.148987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.445 [2024-10-09 01:31:16.149017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:17.445 [2024-10-09 01:31:16.149038] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.445 [2024-10-09 01:31:16.151488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.445 [2024-10-09 01:31:16.151604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:17.445 BaseBdev1 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.445 BaseBdev2_malloc 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.445 [2024-10-09 01:31:16.200664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:17.445 [2024-10-09 01:31:16.200895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.445 [2024-10-09 01:31:16.200952] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:17.445 [2024-10-09 01:31:16.200982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.445 [2024-10-09 01:31:16.205908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.445 [2024-10-09 01:31:16.205964] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:17.445 BaseBdev2 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.445 spare_malloc 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.445 spare_delay 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.445 [2024-10-09 01:31:16.248832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:17.445 [2024-10-09 01:31:16.248888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.445 [2024-10-09 01:31:16.248907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:17.445 [2024-10-09 01:31:16.248917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.445 [2024-10-09 
01:31:16.251293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.445 [2024-10-09 01:31:16.251331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:17.445 spare 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.445 [2024-10-09 01:31:16.260884] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:17.445 [2024-10-09 01:31:16.262956] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:17.445 [2024-10-09 01:31:16.263039] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:17.445 [2024-10-09 01:31:16.263051] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:17.445 [2024-10-09 01:31:16.263301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:17.445 [2024-10-09 01:31:16.263441] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:17.445 [2024-10-09 01:31:16.263451] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:17.445 [2024-10-09 01:31:16.263609] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:17.445 01:31:16 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.445 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.446 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.446 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.446 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.446 "name": "raid_bdev1", 00:11:17.446 "uuid": "218dca41-d334-4596-9be5-c24ed8627a24", 00:11:17.446 "strip_size_kb": 0, 00:11:17.446 "state": "online", 00:11:17.446 "raid_level": "raid1", 00:11:17.446 "superblock": false, 00:11:17.446 "num_base_bdevs": 2, 00:11:17.446 "num_base_bdevs_discovered": 2, 00:11:17.446 "num_base_bdevs_operational": 2, 00:11:17.446 "base_bdevs_list": [ 00:11:17.446 { 00:11:17.446 "name": "BaseBdev1", 
00:11:17.446 "uuid": "181b1193-19f0-539f-a540-d3e56b0d894e", 00:11:17.446 "is_configured": true, 00:11:17.446 "data_offset": 0, 00:11:17.446 "data_size": 65536 00:11:17.446 }, 00:11:17.446 { 00:11:17.446 "name": "BaseBdev2", 00:11:17.446 "uuid": "de1e01b2-9a0f-5876-9524-669f2693fd55", 00:11:17.446 "is_configured": true, 00:11:17.446 "data_offset": 0, 00:11:17.446 "data_size": 65536 00:11:17.446 } 00:11:17.446 ] 00:11:17.446 }' 00:11:17.446 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.446 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.015 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:18.015 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:18.015 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.015 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.015 [2024-10-09 01:31:16.637278] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.015 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.015 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:18.015 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.015 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:18.015 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.015 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.015 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.015 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:18.015 
01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:18.015 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:18.015 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:18.015 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:18.015 01:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:18.015 01:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:18.015 01:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:18.015 01:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:18.015 01:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:18.015 01:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:18.015 01:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:18.015 01:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:18.015 01:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:18.275 [2024-10-09 01:31:16.921192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:18.275 /dev/nbd0 00:11:18.275 01:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:18.275 01:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:18.275 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:18.275 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:18.275 01:31:16 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:18.275 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:18.275 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:18.275 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:18.275 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:18.275 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:18.275 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.275 1+0 records in 00:11:18.275 1+0 records out 00:11:18.275 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401419 s, 10.2 MB/s 00:11:18.275 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.275 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:18.275 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.275 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:18.275 01:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:18.275 01:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:18.275 01:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:18.275 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:18.275 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:18.275 01:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:11:22.495 65536+0 records in 00:11:22.495 65536+0 records out 00:11:22.495 33554432 bytes (34 MB, 32 MiB) copied, 3.6178 s, 9.3 MB/s 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:22.495 [2024-10-09 01:31:20.815486] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.495 [2024-10-09 01:31:20.867547] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.495 01:31:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.495 01:31:20 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.495 "name": "raid_bdev1", 00:11:22.495 "uuid": "218dca41-d334-4596-9be5-c24ed8627a24", 00:11:22.495 "strip_size_kb": 0, 00:11:22.495 "state": "online", 00:11:22.495 "raid_level": "raid1", 00:11:22.495 "superblock": false, 00:11:22.495 "num_base_bdevs": 2, 00:11:22.495 "num_base_bdevs_discovered": 1, 00:11:22.495 "num_base_bdevs_operational": 1, 00:11:22.495 "base_bdevs_list": [ 00:11:22.495 { 00:11:22.495 "name": null, 00:11:22.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.496 "is_configured": false, 00:11:22.496 "data_offset": 0, 00:11:22.496 "data_size": 65536 00:11:22.496 }, 00:11:22.496 { 00:11:22.496 "name": "BaseBdev2", 00:11:22.496 "uuid": "de1e01b2-9a0f-5876-9524-669f2693fd55", 00:11:22.496 "is_configured": true, 00:11:22.496 "data_offset": 0, 00:11:22.496 "data_size": 65536 00:11:22.496 } 00:11:22.496 ] 00:11:22.496 }' 00:11:22.496 01:31:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.496 01:31:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.496 01:31:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:22.496 01:31:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.496 01:31:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.496 [2024-10-09 01:31:21.283627] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:22.496 [2024-10-09 01:31:21.288067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:11:22.496 01:31:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.496 01:31:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:22.496 [2024-10-09 01:31:21.289967] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:11:23.435 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:23.435 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:23.435 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:23.435 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:23.435 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:23.435 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.435 01:31:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.435 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.435 01:31:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.435 01:31:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.695 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:23.695 "name": "raid_bdev1", 00:11:23.695 "uuid": "218dca41-d334-4596-9be5-c24ed8627a24", 00:11:23.695 "strip_size_kb": 0, 00:11:23.695 "state": "online", 00:11:23.695 "raid_level": "raid1", 00:11:23.695 "superblock": false, 00:11:23.695 "num_base_bdevs": 2, 00:11:23.695 "num_base_bdevs_discovered": 2, 00:11:23.695 "num_base_bdevs_operational": 2, 00:11:23.695 "process": { 00:11:23.695 "type": "rebuild", 00:11:23.695 "target": "spare", 00:11:23.695 "progress": { 00:11:23.695 "blocks": 20480, 00:11:23.695 "percent": 31 00:11:23.695 } 00:11:23.695 }, 00:11:23.695 "base_bdevs_list": [ 00:11:23.695 { 00:11:23.695 "name": "spare", 00:11:23.695 "uuid": "a24c2d65-3a2b-5bf0-84d4-f955dc9cfbcf", 00:11:23.695 "is_configured": true, 00:11:23.695 "data_offset": 0, 00:11:23.695 
"data_size": 65536 00:11:23.695 }, 00:11:23.695 { 00:11:23.695 "name": "BaseBdev2", 00:11:23.695 "uuid": "de1e01b2-9a0f-5876-9524-669f2693fd55", 00:11:23.695 "is_configured": true, 00:11:23.695 "data_offset": 0, 00:11:23.695 "data_size": 65536 00:11:23.695 } 00:11:23.695 ] 00:11:23.695 }' 00:11:23.695 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:23.695 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:23.695 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:23.695 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:23.695 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:23.695 01:31:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.695 01:31:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.695 [2024-10-09 01:31:22.425030] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:23.695 [2024-10-09 01:31:22.497036] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:23.695 [2024-10-09 01:31:22.497107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.695 [2024-10-09 01:31:22.497122] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:23.695 [2024-10-09 01:31:22.497132] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:23.695 01:31:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.695 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:23.695 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:11:23.695 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.695 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.695 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.695 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:23.695 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.695 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.695 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.695 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.695 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.695 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.695 01:31:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.695 01:31:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.695 01:31:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.695 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.695 "name": "raid_bdev1", 00:11:23.695 "uuid": "218dca41-d334-4596-9be5-c24ed8627a24", 00:11:23.695 "strip_size_kb": 0, 00:11:23.695 "state": "online", 00:11:23.695 "raid_level": "raid1", 00:11:23.695 "superblock": false, 00:11:23.695 "num_base_bdevs": 2, 00:11:23.695 "num_base_bdevs_discovered": 1, 00:11:23.695 "num_base_bdevs_operational": 1, 00:11:23.695 "base_bdevs_list": [ 00:11:23.695 { 00:11:23.695 "name": null, 00:11:23.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.695 
"is_configured": false, 00:11:23.695 "data_offset": 0, 00:11:23.695 "data_size": 65536 00:11:23.695 }, 00:11:23.695 { 00:11:23.695 "name": "BaseBdev2", 00:11:23.695 "uuid": "de1e01b2-9a0f-5876-9524-669f2693fd55", 00:11:23.695 "is_configured": true, 00:11:23.695 "data_offset": 0, 00:11:23.695 "data_size": 65536 00:11:23.695 } 00:11:23.695 ] 00:11:23.695 }' 00:11:23.695 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.695 01:31:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.265 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:24.265 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:24.265 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:24.265 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:24.265 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:24.265 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.265 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.265 01:31:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.265 01:31:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.265 01:31:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.265 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:24.265 "name": "raid_bdev1", 00:11:24.265 "uuid": "218dca41-d334-4596-9be5-c24ed8627a24", 00:11:24.265 "strip_size_kb": 0, 00:11:24.265 "state": "online", 00:11:24.265 "raid_level": "raid1", 00:11:24.265 "superblock": false, 00:11:24.265 "num_base_bdevs": 2, 00:11:24.265 
"num_base_bdevs_discovered": 1, 00:11:24.265 "num_base_bdevs_operational": 1, 00:11:24.265 "base_bdevs_list": [ 00:11:24.265 { 00:11:24.265 "name": null, 00:11:24.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.265 "is_configured": false, 00:11:24.265 "data_offset": 0, 00:11:24.265 "data_size": 65536 00:11:24.265 }, 00:11:24.265 { 00:11:24.265 "name": "BaseBdev2", 00:11:24.265 "uuid": "de1e01b2-9a0f-5876-9524-669f2693fd55", 00:11:24.265 "is_configured": true, 00:11:24.265 "data_offset": 0, 00:11:24.265 "data_size": 65536 00:11:24.265 } 00:11:24.265 ] 00:11:24.265 }' 00:11:24.265 01:31:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:24.265 01:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:24.265 01:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:24.265 01:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:24.265 01:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:24.265 01:31:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.265 01:31:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.265 [2024-10-09 01:31:23.081891] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:24.265 [2024-10-09 01:31:23.086766] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:11:24.265 01:31:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.265 01:31:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:24.265 [2024-10-09 01:31:23.088718] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:25.204 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:25.204 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:25.204 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:25.204 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:25.204 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:25.464 "name": "raid_bdev1", 00:11:25.464 "uuid": "218dca41-d334-4596-9be5-c24ed8627a24", 00:11:25.464 "strip_size_kb": 0, 00:11:25.464 "state": "online", 00:11:25.464 "raid_level": "raid1", 00:11:25.464 "superblock": false, 00:11:25.464 "num_base_bdevs": 2, 00:11:25.464 "num_base_bdevs_discovered": 2, 00:11:25.464 "num_base_bdevs_operational": 2, 00:11:25.464 "process": { 00:11:25.464 "type": "rebuild", 00:11:25.464 "target": "spare", 00:11:25.464 "progress": { 00:11:25.464 "blocks": 20480, 00:11:25.464 "percent": 31 00:11:25.464 } 00:11:25.464 }, 00:11:25.464 "base_bdevs_list": [ 00:11:25.464 { 00:11:25.464 "name": "spare", 00:11:25.464 "uuid": "a24c2d65-3a2b-5bf0-84d4-f955dc9cfbcf", 00:11:25.464 "is_configured": true, 00:11:25.464 "data_offset": 0, 00:11:25.464 "data_size": 65536 00:11:25.464 }, 00:11:25.464 { 00:11:25.464 "name": "BaseBdev2", 00:11:25.464 "uuid": 
"de1e01b2-9a0f-5876-9524-669f2693fd55", 00:11:25.464 "is_configured": true, 00:11:25.464 "data_offset": 0, 00:11:25.464 "data_size": 65536 00:11:25.464 } 00:11:25.464 ] 00:11:25.464 }' 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=300 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:25.464 "name": "raid_bdev1", 00:11:25.464 "uuid": "218dca41-d334-4596-9be5-c24ed8627a24", 00:11:25.464 "strip_size_kb": 0, 00:11:25.464 "state": "online", 00:11:25.464 "raid_level": "raid1", 00:11:25.464 "superblock": false, 00:11:25.464 "num_base_bdevs": 2, 00:11:25.464 "num_base_bdevs_discovered": 2, 00:11:25.464 "num_base_bdevs_operational": 2, 00:11:25.464 "process": { 00:11:25.464 "type": "rebuild", 00:11:25.464 "target": "spare", 00:11:25.464 "progress": { 00:11:25.464 "blocks": 22528, 00:11:25.464 "percent": 34 00:11:25.464 } 00:11:25.464 }, 00:11:25.464 "base_bdevs_list": [ 00:11:25.464 { 00:11:25.464 "name": "spare", 00:11:25.464 "uuid": "a24c2d65-3a2b-5bf0-84d4-f955dc9cfbcf", 00:11:25.464 "is_configured": true, 00:11:25.464 "data_offset": 0, 00:11:25.464 "data_size": 65536 00:11:25.464 }, 00:11:25.464 { 00:11:25.464 "name": "BaseBdev2", 00:11:25.464 "uuid": "de1e01b2-9a0f-5876-9524-669f2693fd55", 00:11:25.464 "is_configured": true, 00:11:25.464 "data_offset": 0, 00:11:25.464 "data_size": 65536 00:11:25.464 } 00:11:25.464 ] 00:11:25.464 }' 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:25.464 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:25.724 01:31:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:25.724 01:31:24 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:11:26.663 01:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:26.663 01:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:26.663 01:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:26.663 01:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:26.663 01:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:26.663 01:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:26.663 01:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.663 01:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.663 01:31:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.663 01:31:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.663 01:31:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.663 01:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:26.663 "name": "raid_bdev1", 00:11:26.663 "uuid": "218dca41-d334-4596-9be5-c24ed8627a24", 00:11:26.663 "strip_size_kb": 0, 00:11:26.663 "state": "online", 00:11:26.663 "raid_level": "raid1", 00:11:26.663 "superblock": false, 00:11:26.663 "num_base_bdevs": 2, 00:11:26.663 "num_base_bdevs_discovered": 2, 00:11:26.663 "num_base_bdevs_operational": 2, 00:11:26.663 "process": { 00:11:26.663 "type": "rebuild", 00:11:26.663 "target": "spare", 00:11:26.663 "progress": { 00:11:26.663 "blocks": 45056, 00:11:26.663 "percent": 68 00:11:26.663 } 00:11:26.663 }, 00:11:26.663 "base_bdevs_list": [ 00:11:26.663 { 00:11:26.663 "name": "spare", 00:11:26.663 "uuid": 
"a24c2d65-3a2b-5bf0-84d4-f955dc9cfbcf", 00:11:26.663 "is_configured": true, 00:11:26.663 "data_offset": 0, 00:11:26.663 "data_size": 65536 00:11:26.663 }, 00:11:26.663 { 00:11:26.663 "name": "BaseBdev2", 00:11:26.663 "uuid": "de1e01b2-9a0f-5876-9524-669f2693fd55", 00:11:26.663 "is_configured": true, 00:11:26.663 "data_offset": 0, 00:11:26.663 "data_size": 65536 00:11:26.663 } 00:11:26.663 ] 00:11:26.663 }' 00:11:26.663 01:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:26.663 01:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:26.663 01:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:26.663 01:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:26.663 01:31:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:27.628 [2024-10-09 01:31:26.305477] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:27.628 [2024-10-09 01:31:26.305570] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:27.628 [2024-10-09 01:31:26.305614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.628 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:27.628 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:27.628 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:27.628 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:27.628 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:27.628 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:27.628 01:31:26 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.628 01:31:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.628 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.628 01:31:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:27.888 "name": "raid_bdev1", 00:11:27.888 "uuid": "218dca41-d334-4596-9be5-c24ed8627a24", 00:11:27.888 "strip_size_kb": 0, 00:11:27.888 "state": "online", 00:11:27.888 "raid_level": "raid1", 00:11:27.888 "superblock": false, 00:11:27.888 "num_base_bdevs": 2, 00:11:27.888 "num_base_bdevs_discovered": 2, 00:11:27.888 "num_base_bdevs_operational": 2, 00:11:27.888 "base_bdevs_list": [ 00:11:27.888 { 00:11:27.888 "name": "spare", 00:11:27.888 "uuid": "a24c2d65-3a2b-5bf0-84d4-f955dc9cfbcf", 00:11:27.888 "is_configured": true, 00:11:27.888 "data_offset": 0, 00:11:27.888 "data_size": 65536 00:11:27.888 }, 00:11:27.888 { 00:11:27.888 "name": "BaseBdev2", 00:11:27.888 "uuid": "de1e01b2-9a0f-5876-9524-669f2693fd55", 00:11:27.888 "is_configured": true, 00:11:27.888 "data_offset": 0, 00:11:27.888 "data_size": 65536 00:11:27.888 } 00:11:27.888 ] 00:11:27.888 }' 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:27.888 "name": "raid_bdev1", 00:11:27.888 "uuid": "218dca41-d334-4596-9be5-c24ed8627a24", 00:11:27.888 "strip_size_kb": 0, 00:11:27.888 "state": "online", 00:11:27.888 "raid_level": "raid1", 00:11:27.888 "superblock": false, 00:11:27.888 "num_base_bdevs": 2, 00:11:27.888 "num_base_bdevs_discovered": 2, 00:11:27.888 "num_base_bdevs_operational": 2, 00:11:27.888 "base_bdevs_list": [ 00:11:27.888 { 00:11:27.888 "name": "spare", 00:11:27.888 "uuid": "a24c2d65-3a2b-5bf0-84d4-f955dc9cfbcf", 00:11:27.888 "is_configured": true, 00:11:27.888 "data_offset": 0, 00:11:27.888 "data_size": 65536 00:11:27.888 }, 00:11:27.888 { 00:11:27.888 "name": "BaseBdev2", 00:11:27.888 "uuid": "de1e01b2-9a0f-5876-9524-669f2693fd55", 00:11:27.888 "is_configured": true, 00:11:27.888 "data_offset": 0, 00:11:27.888 "data_size": 65536 
00:11:27.888 } 00:11:27.888 ] 00:11:27.888 }' 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.888 01:31:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.148 
01:31:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.148 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.148 "name": "raid_bdev1", 00:11:28.148 "uuid": "218dca41-d334-4596-9be5-c24ed8627a24", 00:11:28.148 "strip_size_kb": 0, 00:11:28.148 "state": "online", 00:11:28.148 "raid_level": "raid1", 00:11:28.148 "superblock": false, 00:11:28.148 "num_base_bdevs": 2, 00:11:28.148 "num_base_bdevs_discovered": 2, 00:11:28.148 "num_base_bdevs_operational": 2, 00:11:28.148 "base_bdevs_list": [ 00:11:28.148 { 00:11:28.148 "name": "spare", 00:11:28.148 "uuid": "a24c2d65-3a2b-5bf0-84d4-f955dc9cfbcf", 00:11:28.148 "is_configured": true, 00:11:28.148 "data_offset": 0, 00:11:28.148 "data_size": 65536 00:11:28.148 }, 00:11:28.148 { 00:11:28.148 "name": "BaseBdev2", 00:11:28.148 "uuid": "de1e01b2-9a0f-5876-9524-669f2693fd55", 00:11:28.148 "is_configured": true, 00:11:28.148 "data_offset": 0, 00:11:28.148 "data_size": 65536 00:11:28.148 } 00:11:28.148 ] 00:11:28.148 }' 00:11:28.148 01:31:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.148 01:31:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.408 01:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:28.408 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.408 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.408 [2024-10-09 01:31:27.162219] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:28.408 [2024-10-09 01:31:27.162258] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:28.408 [2024-10-09 01:31:27.162350] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.408 [2024-10-09 01:31:27.162417] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:28.408 [2024-10-09 01:31:27.162430] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:28.408 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.408 01:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.408 01:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:28.408 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.408 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.408 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.408 01:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:28.408 01:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:28.408 01:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:28.408 01:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:28.408 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:28.408 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:28.408 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:28.408 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:28.408 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:28.408 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:28.408 01:31:27 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:28.408 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:28.408 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:28.668 /dev/nbd0 00:11:28.668 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:28.668 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:28.668 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:28.668 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:28.668 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:28.668 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:28.668 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:28.668 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:28.668 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:28.668 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:28.668 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.668 1+0 records in 00:11:28.668 1+0 records out 00:11:28.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363634 s, 11.3 MB/s 00:11:28.668 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.668 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:28.668 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.668 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:28.668 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:28.668 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.668 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:28.668 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:28.928 /dev/nbd1 00:11:28.928 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:28.928 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:28.928 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:28.928 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:28.928 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:28.928 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:28.928 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:28.928 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:28.928 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:28.928 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:28.928 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.928 1+0 records in 00:11:28.928 1+0 records out 00:11:28.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258132 s, 15.9 MB/s 00:11:28.928 01:31:27 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.928 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:28.928 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.928 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:28.928 01:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:28.928 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.928 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:28.928 01:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:28.928 01:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:28.928 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:28.928 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:28.928 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:28.929 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:28.929 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:28.929 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:29.189 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:29.189 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:29.189 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:29.189 
01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:29.189 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:29.189 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:29.189 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:29.189 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:29.189 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:29.189 01:31:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:29.449 01:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:29.449 01:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:29.449 01:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:29.449 01:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:29.449 01:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:29.449 01:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:29.449 01:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:29.449 01:31:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:29.449 01:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:29.449 01:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 87157 00:11:29.449 01:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 87157 ']' 00:11:29.449 01:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 87157 00:11:29.449 01:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 
-- # uname 00:11:29.449 01:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:29.449 01:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87157 00:11:29.449 01:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:29.449 01:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:29.449 01:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87157' 00:11:29.449 killing process with pid 87157 00:11:29.449 01:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 87157 00:11:29.449 Received shutdown signal, test time was about 60.000000 seconds 00:11:29.449 00:11:29.449 Latency(us) 00:11:29.449 [2024-10-09T01:31:28.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:29.449 [2024-10-09T01:31:28.342Z] =================================================================================================================== 00:11:29.449 [2024-10-09T01:31:28.342Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:29.449 01:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 87157 00:11:29.449 [2024-10-09 01:31:28.244236] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:29.449 [2024-10-09 01:31:28.275244] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:29.709 01:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:29.709 ************************************ 00:11:29.709 END TEST raid_rebuild_test 00:11:29.709 ************************************ 00:11:29.709 00:11:29.709 real 0m13.320s 00:11:29.709 user 0m15.165s 00:11:29.709 sys 0m2.906s 00:11:29.709 01:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:29.709 01:31:28 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.709 01:31:28 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:11:29.709 01:31:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:29.709 01:31:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:29.709 01:31:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:29.709 ************************************ 00:11:29.709 START TEST raid_rebuild_test_sb 00:11:29.709 ************************************ 00:11:29.709 01:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:11:29.709 01:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:29.709 01:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:29.709 01:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:29.709 01:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:29.709 01:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:29.709 01:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:29.709 01:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:29.709 01:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:29.709 01:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:29.709 01:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:29.709 01:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:29.709 01:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:29.709 01:31:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:29.969 01:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:29.969 01:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:29.969 01:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:29.969 01:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:29.969 01:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:29.969 01:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:29.969 01:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:29.969 01:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:29.969 01:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:29.969 01:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:29.969 01:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:29.969 01:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:29.969 01:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=87558 00:11:29.969 01:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 87558 00:11:29.969 01:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 87558 ']' 00:11:29.969 01:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.969 01:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:29.969 
01:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.969 01:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:29.969 01:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.969 [2024-10-09 01:31:28.674554] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:11:29.969 [2024-10-09 01:31:28.674737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:11:29.969 Zero copy mechanism will not be used. 00:11:29.969 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87558 ] 00:11:29.969 [2024-10-09 01:31:28.805358] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:29.969 [2024-10-09 01:31:28.834449] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.229 [2024-10-09 01:31:28.880030] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.229 [2024-10-09 01:31:28.921669] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.229 [2024-10-09 01:31:28.921709] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.798 BaseBdev1_malloc 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.798 [2024-10-09 01:31:29.536398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:30.798 [2024-10-09 01:31:29.536478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.798 [2024-10-09 01:31:29.536512] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:30.798 [2024-10-09 
01:31:29.536547] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.798 [2024-10-09 01:31:29.538616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.798 [2024-10-09 01:31:29.538652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:30.798 BaseBdev1 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.798 BaseBdev2_malloc 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.798 [2024-10-09 01:31:29.581357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:30.798 [2024-10-09 01:31:29.581469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.798 [2024-10-09 01:31:29.581508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:30.798 [2024-10-09 01:31:29.581571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.798 [2024-10-09 01:31:29.586286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:11:30.798 [2024-10-09 01:31:29.586367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:30.798 BaseBdev2 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.798 spare_malloc 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.798 spare_delay 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.798 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.798 [2024-10-09 01:31:29.624077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:30.799 [2024-10-09 01:31:29.624130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.799 [2024-10-09 01:31:29.624164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:30.799 [2024-10-09 01:31:29.624174] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.799 [2024-10-09 01:31:29.626244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.799 [2024-10-09 01:31:29.626324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:30.799 spare 00:11:30.799 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.799 01:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:30.799 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.799 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.799 [2024-10-09 01:31:29.636128] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:30.799 [2024-10-09 01:31:29.637945] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.799 [2024-10-09 01:31:29.638088] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:30.799 [2024-10-09 01:31:29.638102] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:30.799 [2024-10-09 01:31:29.638351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:30.799 [2024-10-09 01:31:29.638468] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:30.799 [2024-10-09 01:31:29.638481] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:30.799 [2024-10-09 01:31:29.638599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.799 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.799 01:31:29 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:30.799 01:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.799 01:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.799 01:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.799 01:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.799 01:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:30.799 01:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.799 01:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.799 01:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.799 01:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.799 01:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.799 01:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.799 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.799 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.799 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.799 01:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.799 "name": "raid_bdev1", 00:11:30.799 "uuid": "675f093c-f0f3-43fb-832c-7036072c8f4c", 00:11:30.799 "strip_size_kb": 0, 00:11:30.799 "state": "online", 00:11:30.799 "raid_level": "raid1", 00:11:30.799 "superblock": true, 00:11:30.799 "num_base_bdevs": 2, 00:11:30.799 
"num_base_bdevs_discovered": 2, 00:11:30.799 "num_base_bdevs_operational": 2, 00:11:30.799 "base_bdevs_list": [ 00:11:30.799 { 00:11:30.799 "name": "BaseBdev1", 00:11:30.799 "uuid": "56fe8cd6-7940-59ae-aee3-96c19ea762c0", 00:11:30.799 "is_configured": true, 00:11:30.799 "data_offset": 2048, 00:11:30.799 "data_size": 63488 00:11:30.799 }, 00:11:30.799 { 00:11:30.799 "name": "BaseBdev2", 00:11:30.799 "uuid": "bceb92b1-4cee-5e11-ac28-1ef1827d6b23", 00:11:30.799 "is_configured": true, 00:11:30.799 "data_offset": 2048, 00:11:30.799 "data_size": 63488 00:11:30.799 } 00:11:30.799 ] 00:11:30.799 }' 00:11:30.799 01:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.799 01:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.368 01:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:31.368 01:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:31.368 01:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.368 01:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.368 [2024-10-09 01:31:30.064584] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:31.368 01:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.368 01:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:31.368 01:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.368 01:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:31.368 01:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.368 01:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:31.368 01:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.368 01:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:31.368 01:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:31.368 01:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:31.368 01:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:31.368 01:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:31.368 01:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:31.368 01:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:31.368 01:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:31.368 01:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:31.368 01:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:31.368 01:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:31.368 01:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:31.368 01:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:31.368 01:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:31.628 [2024-10-09 01:31:30.328368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:31.628 /dev/nbd0 00:11:31.628 01:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:31.628 01:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:11:31.628 01:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:31.628 01:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:31.628 01:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:31.628 01:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:31.628 01:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:31.628 01:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:31.628 01:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:31.628 01:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:31.628 01:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:31.628 1+0 records in 00:11:31.628 1+0 records out 00:11:31.628 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387692 s, 10.6 MB/s 00:11:31.628 01:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.628 01:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:31.628 01:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.628 01:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:31.628 01:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:31.628 01:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:31.628 01:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:31.628 01:31:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:31.628 01:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:31.628 01:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:11:35.823 63488+0 records in 00:11:35.823 63488+0 records out 00:11:35.823 32505856 bytes (33 MB, 31 MiB) copied, 3.72186 s, 8.7 MB/s 00:11:35.823 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:35.823 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:35.823 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:35.823 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:35.823 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:35.823 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:35.824 [2024-10-09 01:31:34.307493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.824 [2024-10-09 01:31:34.343610] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.824 01:31:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.824 "name": "raid_bdev1", 00:11:35.824 "uuid": "675f093c-f0f3-43fb-832c-7036072c8f4c", 00:11:35.824 "strip_size_kb": 0, 00:11:35.824 "state": "online", 00:11:35.824 "raid_level": "raid1", 00:11:35.824 "superblock": true, 00:11:35.824 "num_base_bdevs": 2, 00:11:35.824 "num_base_bdevs_discovered": 1, 00:11:35.824 "num_base_bdevs_operational": 1, 00:11:35.824 "base_bdevs_list": [ 00:11:35.824 { 00:11:35.824 "name": null, 00:11:35.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.824 "is_configured": false, 00:11:35.824 "data_offset": 0, 00:11:35.824 "data_size": 63488 00:11:35.824 }, 00:11:35.824 { 00:11:35.824 "name": "BaseBdev2", 00:11:35.824 "uuid": "bceb92b1-4cee-5e11-ac28-1ef1827d6b23", 00:11:35.824 "is_configured": true, 00:11:35.824 "data_offset": 2048, 00:11:35.824 "data_size": 63488 00:11:35.824 } 00:11:35.824 ] 00:11:35.824 }' 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.824 01:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.083 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:36.083 01:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.083 01:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.083 [2024-10-09 01:31:34.743776] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:11:36.083 [2024-10-09 01:31:34.751229] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:11:36.083 01:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.083 01:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:36.083 [2024-10-09 01:31:34.753486] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:37.019 01:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:37.019 01:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:37.019 01:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:37.019 01:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:37.019 01:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:37.019 01:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.019 01:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.019 01:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.019 01:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.019 01:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.019 01:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:37.019 "name": "raid_bdev1", 00:11:37.019 "uuid": "675f093c-f0f3-43fb-832c-7036072c8f4c", 00:11:37.019 "strip_size_kb": 0, 00:11:37.019 "state": "online", 00:11:37.019 "raid_level": "raid1", 00:11:37.019 "superblock": true, 00:11:37.019 "num_base_bdevs": 2, 00:11:37.019 
"num_base_bdevs_discovered": 2, 00:11:37.019 "num_base_bdevs_operational": 2, 00:11:37.019 "process": { 00:11:37.019 "type": "rebuild", 00:11:37.019 "target": "spare", 00:11:37.019 "progress": { 00:11:37.019 "blocks": 20480, 00:11:37.019 "percent": 32 00:11:37.019 } 00:11:37.019 }, 00:11:37.019 "base_bdevs_list": [ 00:11:37.019 { 00:11:37.019 "name": "spare", 00:11:37.019 "uuid": "2f46a811-8ce3-5cb5-9f1d-ca201568737c", 00:11:37.019 "is_configured": true, 00:11:37.019 "data_offset": 2048, 00:11:37.019 "data_size": 63488 00:11:37.019 }, 00:11:37.019 { 00:11:37.019 "name": "BaseBdev2", 00:11:37.019 "uuid": "bceb92b1-4cee-5e11-ac28-1ef1827d6b23", 00:11:37.019 "is_configured": true, 00:11:37.019 "data_offset": 2048, 00:11:37.019 "data_size": 63488 00:11:37.019 } 00:11:37.019 ] 00:11:37.019 }' 00:11:37.019 01:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:37.019 01:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:37.019 01:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:37.019 01:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:37.019 01:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:37.019 01:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.019 01:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.279 [2024-10-09 01:31:35.915905] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:37.279 [2024-10-09 01:31:35.963801] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:37.279 [2024-10-09 01:31:35.963884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.279 [2024-10-09 01:31:35.963899] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:37.279 [2024-10-09 01:31:35.963909] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:37.279 01:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.279 01:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:37.279 01:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.279 01:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.279 01:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.279 01:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.279 01:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:37.279 01:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.279 01:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.279 01:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.279 01:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.279 01:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.279 01:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.279 01:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.279 01:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.279 01:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.279 01:31:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.279 "name": "raid_bdev1", 00:11:37.279 "uuid": "675f093c-f0f3-43fb-832c-7036072c8f4c", 00:11:37.279 "strip_size_kb": 0, 00:11:37.279 "state": "online", 00:11:37.279 "raid_level": "raid1", 00:11:37.279 "superblock": true, 00:11:37.279 "num_base_bdevs": 2, 00:11:37.279 "num_base_bdevs_discovered": 1, 00:11:37.279 "num_base_bdevs_operational": 1, 00:11:37.279 "base_bdevs_list": [ 00:11:37.279 { 00:11:37.279 "name": null, 00:11:37.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.279 "is_configured": false, 00:11:37.279 "data_offset": 0, 00:11:37.279 "data_size": 63488 00:11:37.279 }, 00:11:37.279 { 00:11:37.279 "name": "BaseBdev2", 00:11:37.279 "uuid": "bceb92b1-4cee-5e11-ac28-1ef1827d6b23", 00:11:37.279 "is_configured": true, 00:11:37.279 "data_offset": 2048, 00:11:37.279 "data_size": 63488 00:11:37.279 } 00:11:37.279 ] 00:11:37.279 }' 00:11:37.279 01:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.279 01:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.848 01:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:37.848 01:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:37.848 01:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:37.848 01:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:37.848 01:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:37.848 01:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.848 01:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.848 01:31:36 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.848 01:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.848 01:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.848 01:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:37.848 "name": "raid_bdev1", 00:11:37.848 "uuid": "675f093c-f0f3-43fb-832c-7036072c8f4c", 00:11:37.848 "strip_size_kb": 0, 00:11:37.848 "state": "online", 00:11:37.848 "raid_level": "raid1", 00:11:37.848 "superblock": true, 00:11:37.848 "num_base_bdevs": 2, 00:11:37.848 "num_base_bdevs_discovered": 1, 00:11:37.848 "num_base_bdevs_operational": 1, 00:11:37.848 "base_bdevs_list": [ 00:11:37.848 { 00:11:37.848 "name": null, 00:11:37.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.848 "is_configured": false, 00:11:37.848 "data_offset": 0, 00:11:37.848 "data_size": 63488 00:11:37.848 }, 00:11:37.848 { 00:11:37.848 "name": "BaseBdev2", 00:11:37.848 "uuid": "bceb92b1-4cee-5e11-ac28-1ef1827d6b23", 00:11:37.848 "is_configured": true, 00:11:37.848 "data_offset": 2048, 00:11:37.848 "data_size": 63488 00:11:37.848 } 00:11:37.848 ] 00:11:37.848 }' 00:11:37.848 01:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:37.848 01:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:37.848 01:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:37.848 01:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:37.848 01:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:37.848 01:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.848 01:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:37.848 [2024-10-09 01:31:36.563825] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:37.848 [2024-10-09 01:31:36.570869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:11:37.848 01:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.848 01:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:37.848 [2024-10-09 01:31:36.573044] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:38.787 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:38.787 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:38.787 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:38.787 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:38.787 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:38.787 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.787 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.787 01:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.787 01:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.787 01:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.787 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:38.787 "name": "raid_bdev1", 00:11:38.787 "uuid": "675f093c-f0f3-43fb-832c-7036072c8f4c", 00:11:38.787 "strip_size_kb": 0, 00:11:38.787 "state": "online", 00:11:38.787 "raid_level": "raid1", 
00:11:38.787 "superblock": true, 00:11:38.787 "num_base_bdevs": 2, 00:11:38.787 "num_base_bdevs_discovered": 2, 00:11:38.787 "num_base_bdevs_operational": 2, 00:11:38.787 "process": { 00:11:38.787 "type": "rebuild", 00:11:38.787 "target": "spare", 00:11:38.787 "progress": { 00:11:38.787 "blocks": 20480, 00:11:38.787 "percent": 32 00:11:38.787 } 00:11:38.787 }, 00:11:38.787 "base_bdevs_list": [ 00:11:38.787 { 00:11:38.787 "name": "spare", 00:11:38.787 "uuid": "2f46a811-8ce3-5cb5-9f1d-ca201568737c", 00:11:38.787 "is_configured": true, 00:11:38.787 "data_offset": 2048, 00:11:38.787 "data_size": 63488 00:11:38.787 }, 00:11:38.787 { 00:11:38.787 "name": "BaseBdev2", 00:11:38.787 "uuid": "bceb92b1-4cee-5e11-ac28-1ef1827d6b23", 00:11:38.787 "is_configured": true, 00:11:38.787 "data_offset": 2048, 00:11:38.787 "data_size": 63488 00:11:38.787 } 00:11:38.787 ] 00:11:38.787 }' 00:11:38.787 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:39.047 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:39.047 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:39.047 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:39.047 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:39.047 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:39.047 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:39.047 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:39.047 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:39.047 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:39.047 01:31:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=313 00:11:39.047 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:39.047 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:39.047 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:39.047 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:39.047 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:39.047 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:39.047 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.047 01:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.047 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.047 01:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.047 01:31:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.047 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:39.047 "name": "raid_bdev1", 00:11:39.047 "uuid": "675f093c-f0f3-43fb-832c-7036072c8f4c", 00:11:39.047 "strip_size_kb": 0, 00:11:39.047 "state": "online", 00:11:39.047 "raid_level": "raid1", 00:11:39.047 "superblock": true, 00:11:39.047 "num_base_bdevs": 2, 00:11:39.047 "num_base_bdevs_discovered": 2, 00:11:39.047 "num_base_bdevs_operational": 2, 00:11:39.047 "process": { 00:11:39.047 "type": "rebuild", 00:11:39.047 "target": "spare", 00:11:39.047 "progress": { 00:11:39.047 "blocks": 22528, 00:11:39.047 "percent": 35 00:11:39.047 } 00:11:39.047 }, 00:11:39.047 "base_bdevs_list": [ 
00:11:39.047 { 00:11:39.047 "name": "spare", 00:11:39.047 "uuid": "2f46a811-8ce3-5cb5-9f1d-ca201568737c", 00:11:39.048 "is_configured": true, 00:11:39.048 "data_offset": 2048, 00:11:39.048 "data_size": 63488 00:11:39.048 }, 00:11:39.048 { 00:11:39.048 "name": "BaseBdev2", 00:11:39.048 "uuid": "bceb92b1-4cee-5e11-ac28-1ef1827d6b23", 00:11:39.048 "is_configured": true, 00:11:39.048 "data_offset": 2048, 00:11:39.048 "data_size": 63488 00:11:39.048 } 00:11:39.048 ] 00:11:39.048 }' 00:11:39.048 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:39.048 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:39.048 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:39.048 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:39.048 01:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:40.428 01:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:40.428 01:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:40.428 01:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:40.428 01:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:40.428 01:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:40.428 01:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:40.428 01:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.428 01:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.428 01:31:38 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.428 01:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.428 01:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.428 01:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:40.428 "name": "raid_bdev1", 00:11:40.428 "uuid": "675f093c-f0f3-43fb-832c-7036072c8f4c", 00:11:40.428 "strip_size_kb": 0, 00:11:40.428 "state": "online", 00:11:40.428 "raid_level": "raid1", 00:11:40.428 "superblock": true, 00:11:40.428 "num_base_bdevs": 2, 00:11:40.428 "num_base_bdevs_discovered": 2, 00:11:40.428 "num_base_bdevs_operational": 2, 00:11:40.428 "process": { 00:11:40.428 "type": "rebuild", 00:11:40.428 "target": "spare", 00:11:40.428 "progress": { 00:11:40.428 "blocks": 47104, 00:11:40.428 "percent": 74 00:11:40.428 } 00:11:40.428 }, 00:11:40.428 "base_bdevs_list": [ 00:11:40.428 { 00:11:40.428 "name": "spare", 00:11:40.428 "uuid": "2f46a811-8ce3-5cb5-9f1d-ca201568737c", 00:11:40.428 "is_configured": true, 00:11:40.428 "data_offset": 2048, 00:11:40.428 "data_size": 63488 00:11:40.428 }, 00:11:40.428 { 00:11:40.428 "name": "BaseBdev2", 00:11:40.428 "uuid": "bceb92b1-4cee-5e11-ac28-1ef1827d6b23", 00:11:40.428 "is_configured": true, 00:11:40.428 "data_offset": 2048, 00:11:40.428 "data_size": 63488 00:11:40.428 } 00:11:40.428 ] 00:11:40.428 }' 00:11:40.428 01:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:40.428 01:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:40.428 01:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:40.429 01:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:40.429 01:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:40.997 [2024-10-09 
01:31:39.699490] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:40.997 [2024-10-09 01:31:39.699600] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:40.997 [2024-10-09 01:31:39.699734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.258 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:41.258 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:41.258 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:41.258 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:41.258 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:41.258 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:41.258 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.258 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.258 01:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.258 01:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.258 01:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.258 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:41.258 "name": "raid_bdev1", 00:11:41.258 "uuid": "675f093c-f0f3-43fb-832c-7036072c8f4c", 00:11:41.258 "strip_size_kb": 0, 00:11:41.258 "state": "online", 00:11:41.258 "raid_level": "raid1", 00:11:41.258 "superblock": true, 00:11:41.258 "num_base_bdevs": 2, 00:11:41.258 "num_base_bdevs_discovered": 2, 00:11:41.258 
"num_base_bdevs_operational": 2, 00:11:41.258 "base_bdevs_list": [ 00:11:41.258 { 00:11:41.258 "name": "spare", 00:11:41.258 "uuid": "2f46a811-8ce3-5cb5-9f1d-ca201568737c", 00:11:41.258 "is_configured": true, 00:11:41.258 "data_offset": 2048, 00:11:41.258 "data_size": 63488 00:11:41.258 }, 00:11:41.258 { 00:11:41.258 "name": "BaseBdev2", 00:11:41.258 "uuid": "bceb92b1-4cee-5e11-ac28-1ef1827d6b23", 00:11:41.258 "is_configured": true, 00:11:41.258 "data_offset": 2048, 00:11:41.258 "data_size": 63488 00:11:41.258 } 00:11:41.258 ] 00:11:41.258 }' 00:11:41.258 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:41.518 "name": "raid_bdev1", 00:11:41.518 "uuid": "675f093c-f0f3-43fb-832c-7036072c8f4c", 00:11:41.518 "strip_size_kb": 0, 00:11:41.518 "state": "online", 00:11:41.518 "raid_level": "raid1", 00:11:41.518 "superblock": true, 00:11:41.518 "num_base_bdevs": 2, 00:11:41.518 "num_base_bdevs_discovered": 2, 00:11:41.518 "num_base_bdevs_operational": 2, 00:11:41.518 "base_bdevs_list": [ 00:11:41.518 { 00:11:41.518 "name": "spare", 00:11:41.518 "uuid": "2f46a811-8ce3-5cb5-9f1d-ca201568737c", 00:11:41.518 "is_configured": true, 00:11:41.518 "data_offset": 2048, 00:11:41.518 "data_size": 63488 00:11:41.518 }, 00:11:41.518 { 00:11:41.518 "name": "BaseBdev2", 00:11:41.518 "uuid": "bceb92b1-4cee-5e11-ac28-1ef1827d6b23", 00:11:41.518 "is_configured": true, 00:11:41.518 "data_offset": 2048, 00:11:41.518 "data_size": 63488 00:11:41.518 } 00:11:41.518 ] 00:11:41.518 }' 00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.518 01:31:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:41.518 "name": "raid_bdev1",
00:11:41.518 "uuid": "675f093c-f0f3-43fb-832c-7036072c8f4c",
00:11:41.518 "strip_size_kb": 0,
00:11:41.518 "state": "online",
00:11:41.518 "raid_level": "raid1",
00:11:41.518 "superblock": true,
00:11:41.518 "num_base_bdevs": 2,
00:11:41.518 "num_base_bdevs_discovered": 2,
00:11:41.518 "num_base_bdevs_operational": 2,
00:11:41.518 "base_bdevs_list": [
00:11:41.518 {
00:11:41.518 "name": "spare",
00:11:41.518 "uuid": "2f46a811-8ce3-5cb5-9f1d-ca201568737c",
00:11:41.518 "is_configured": true,
00:11:41.518 "data_offset": 2048,
00:11:41.518 "data_size": 63488
00:11:41.518 },
00:11:41.518 {
00:11:41.518 "name": "BaseBdev2",
00:11:41.518 "uuid": "bceb92b1-4cee-5e11-ac28-1ef1827d6b23",
00:11:41.518 "is_configured": true,
00:11:41.518 "data_offset": 2048,
00:11:41.518 "data_size": 63488
00:11:41.518 }
00:11:41.518 ]
00:11:41.518 }'
00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:41.518 01:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:42.088 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:42.088 01:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.088 01:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:42.088 [2024-10-09 01:31:40.719828] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:42.088 [2024-10-09 01:31:40.719869] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:42.088 [2024-10-09 01:31:40.719973] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:42.088 [2024-10-09 01:31:40.720059] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:42.088 [2024-10-09 01:31:40.720072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:11:42.088 01:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.088 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length
00:11:42.088 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:42.088 01:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.088 01:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:42.088 01:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.088 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:11:42.088 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:11:42.088 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:11:42.088 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:11:42.088 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:11:42.088 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:11:42.088 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:11:42.088 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:11:42.088 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:11:42.088 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:11:42.088 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:11:42.088 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:11:42.088 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
/dev/nbd0
00:11:42.348 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:11:42.348 01:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:11:42.348 01:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:11:42.348 01:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i
00:11:42.348 01:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:11:42.348 01:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:11:42.348 01:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:11:42.348 01:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break
00:11:42.348 01:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:11:42.348 01:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:11:42.348 01:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:42.348 1+0 records in
00:11:42.348 1+0 records out
00:11:42.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003891 s, 10.5 MB/s
00:11:42.348 01:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:42.348 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096
00:11:42.348 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:42.348 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:11:42.348 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0
00:11:42.348 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:42.348 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:11:42.348 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
/dev/nbd1
00:11:42.348 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:11:42.348 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:11:42.348 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:11:42.348 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i
00:11:42.348 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:11:42.348 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:11:42.348 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:11:42.348 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break
00:11:42.348 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:11:42.348 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:11:42.348 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:42.613 1+0 records in
00:11:42.613 1+0 records out
00:11:42.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431396 s, 9.5 MB/s
00:11:42.613 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:42.613 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096
00:11:42.613 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:11:42.613 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:11:42.613 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0
00:11:42.613 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:11:42.613 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:11:42.613 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:11:42.613 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:11:42.613 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:11:42.613 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:11:42.613 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:11:42.613 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:11:42.613 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:42.613 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:11:42.880 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:11:42.880 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:11:42.880 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:11:42.880 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:42.880 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:42.880 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:11:42.880 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:11:42.880 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:11:42.880 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:11:42.880 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:11:42.880 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:11:42.880 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:11:42.880 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:11:42.880 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:11:42.880 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:11:42.880 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:11:42.880 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:11:43.139 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:11:43.139 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:11:43.139 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:11:43.139 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.139 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.139 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.139 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:11:43.139 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.139 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.139 [2024-10-09 01:31:41.790425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:11:43.139 [2024-10-09 01:31:41.790493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:43.139 [2024-10-09 01:31:41.790544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:11:43.139 [2024-10-09 01:31:41.790554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:43.139 [2024-10-09 01:31:41.793075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:43.139 [2024-10-09 01:31:41.793166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:11:43.139 [2024-10-09 01:31:41.793266] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:11:43.139 [2024-10-09 01:31:41.793318] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:11:43.139 [2024-10-09 01:31:41.793444] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:43.139 spare
00:11:43.139 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.139 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:11:43.139 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.139 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.139 [2024-10-09 01:31:41.893514] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:11:43.139 [2024-10-09 01:31:41.893601] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:11:43.140 [2024-10-09 01:31:41.893919] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0
00:11:43.140 [2024-10-09 01:31:41.894119] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:11:43.140 [2024-10-09 01:31:41.894162] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
00:11:43.140 [2024-10-09 01:31:41.894347] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:43.140 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.140 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:11:43.140 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:43.140 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:43.140 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:43.140 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:43.140 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:11:43.140 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:43.140 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:43.140 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:43.140 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:43.140 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:43.140 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.140 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.140 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:43.140 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.140 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:43.140 "name": "raid_bdev1",
00:11:43.140 "uuid": "675f093c-f0f3-43fb-832c-7036072c8f4c",
00:11:43.140 "strip_size_kb": 0,
00:11:43.140 "state": "online",
00:11:43.140 "raid_level": "raid1",
00:11:43.140 "superblock": true,
00:11:43.140 "num_base_bdevs": 2,
00:11:43.140 "num_base_bdevs_discovered": 2,
00:11:43.140 "num_base_bdevs_operational": 2,
00:11:43.140 "base_bdevs_list": [
00:11:43.140 {
00:11:43.140 "name": "spare",
00:11:43.140 "uuid": "2f46a811-8ce3-5cb5-9f1d-ca201568737c",
00:11:43.140 "is_configured": true,
00:11:43.140 "data_offset": 2048,
00:11:43.140 "data_size": 63488
00:11:43.140 },
00:11:43.140 {
00:11:43.140 "name": "BaseBdev2",
00:11:43.140 "uuid": "bceb92b1-4cee-5e11-ac28-1ef1827d6b23",
00:11:43.140 "is_configured": true,
00:11:43.140 "data_offset": 2048,
00:11:43.140 "data_size": 63488
00:11:43.140 }
00:11:43.140 ]
00:11:43.140 }'
00:11:43.140 01:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:43.140 01:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:43.709 "name": "raid_bdev1",
00:11:43.709 "uuid": "675f093c-f0f3-43fb-832c-7036072c8f4c",
00:11:43.709 "strip_size_kb": 0,
00:11:43.709 "state": "online",
00:11:43.709 "raid_level": "raid1",
00:11:43.709 "superblock": true,
00:11:43.709 "num_base_bdevs": 2,
00:11:43.709 "num_base_bdevs_discovered": 2,
00:11:43.709 "num_base_bdevs_operational": 2,
00:11:43.709 "base_bdevs_list": [
00:11:43.709 {
00:11:43.709 "name": "spare",
00:11:43.709 "uuid": "2f46a811-8ce3-5cb5-9f1d-ca201568737c",
00:11:43.709 "is_configured": true,
00:11:43.709 "data_offset": 2048,
00:11:43.709 "data_size": 63488
00:11:43.709 },
00:11:43.709 {
00:11:43.709 "name": "BaseBdev2",
00:11:43.709 "uuid": "bceb92b1-4cee-5e11-ac28-1ef1827d6b23",
00:11:43.709 "is_configured": true,
00:11:43.709 "data_offset": 2048,
00:11:43.709 "data_size": 63488
00:11:43.709 }
00:11:43.709 ]
00:11:43.709 }'
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.709 [2024-10-09 01:31:42.534684] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:43.709 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:43.709 "name": "raid_bdev1",
00:11:43.709 "uuid": "675f093c-f0f3-43fb-832c-7036072c8f4c",
00:11:43.709 "strip_size_kb": 0,
00:11:43.710 "state": "online",
00:11:43.710 "raid_level": "raid1",
00:11:43.710 "superblock": true,
00:11:43.710 "num_base_bdevs": 2,
00:11:43.710 "num_base_bdevs_discovered": 1,
00:11:43.710 "num_base_bdevs_operational": 1,
00:11:43.710 "base_bdevs_list": [
00:11:43.710 {
00:11:43.710 "name": null,
00:11:43.710 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:43.710 "is_configured": false,
00:11:43.710 "data_offset": 0,
00:11:43.710 "data_size": 63488
00:11:43.710 },
00:11:43.710 {
00:11:43.710 "name": "BaseBdev2",
00:11:43.710 "uuid": "bceb92b1-4cee-5e11-ac28-1ef1827d6b23",
00:11:43.710 "is_configured": true,
00:11:43.710 "data_offset": 2048,
00:11:43.710 "data_size": 63488
00:11:43.710 }
00:11:43.710 ]
00:11:43.710 }'
00:11:43.710 01:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:43.710 01:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.279 01:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:11:44.279 01:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:44.279 01:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:44.279 [2024-10-09 01:31:43.018882] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:11:44.279 [2024-10-09 01:31:43.019213] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:11:44.279 [2024-10-09 01:31:43.019287] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:11:44.279 [2024-10-09 01:31:43.019360] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:11:44.279 [2024-10-09 01:31:43.026493] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80
00:11:44.279 01:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:44.279 01:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1
00:11:44.279 [2024-10-09 01:31:43.028823] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:11:45.218 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:45.218 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:45.218 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:45.218 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:45.218 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:45.218 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:45.218 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:45.218 01:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.218 01:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.218 01:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.218 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:45.218 "name": "raid_bdev1",
00:11:45.218 "uuid": "675f093c-f0f3-43fb-832c-7036072c8f4c",
00:11:45.218 "strip_size_kb": 0,
00:11:45.218 "state": "online",
00:11:45.218 "raid_level": "raid1",
00:11:45.218 "superblock": true,
00:11:45.218 "num_base_bdevs": 2,
00:11:45.218 "num_base_bdevs_discovered": 2,
00:11:45.218 "num_base_bdevs_operational": 2,
00:11:45.218 "process": {
00:11:45.218 "type": "rebuild",
00:11:45.218 "target": "spare",
00:11:45.218 "progress": {
00:11:45.218 "blocks": 20480,
00:11:45.218 "percent": 32
00:11:45.218 }
00:11:45.218 },
00:11:45.218 "base_bdevs_list": [
00:11:45.218 {
00:11:45.218 "name": "spare",
00:11:45.218 "uuid": "2f46a811-8ce3-5cb5-9f1d-ca201568737c",
00:11:45.218 "is_configured": true,
00:11:45.218 "data_offset": 2048,
00:11:45.218 "data_size": 63488
00:11:45.218 },
00:11:45.218 {
00:11:45.218 "name": "BaseBdev2",
00:11:45.218 "uuid": "bceb92b1-4cee-5e11-ac28-1ef1827d6b23",
00:11:45.218 "is_configured": true,
00:11:45.218 "data_offset": 2048,
00:11:45.218 "data_size": 63488
00:11:45.218 }
00:11:45.218 ]
00:11:45.218 }'
00:11:45.218 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:45.478 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:45.478 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:45.478 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:45.478 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:11:45.478 01:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.478 01:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.478 [2024-10-09 01:31:44.192423] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:45.478 [2024-10-09 01:31:44.238536] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:11:45.478 [2024-10-09 01:31:44.238609] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:45.478 [2024-10-09 01:31:44.238640] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:45.478 [2024-10-09 01:31:44.238650] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:11:45.478 01:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.478 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:11:45.478 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:45.478 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:45.478 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:45.478 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:45.478 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:11:45.478 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:45.478 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:45.478 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:45.478 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:45.478 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:45.478 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:45.478 01:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:45.478 01:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:45.478 01:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:45.478 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:45.478 "name": "raid_bdev1",
00:11:45.478 "uuid": "675f093c-f0f3-43fb-832c-7036072c8f4c",
00:11:45.478 "strip_size_kb": 0,
00:11:45.478 "state": "online",
00:11:45.478 "raid_level": "raid1",
00:11:45.478 "superblock": true,
00:11:45.478 "num_base_bdevs": 2,
00:11:45.478 "num_base_bdevs_discovered": 1,
00:11:45.478 "num_base_bdevs_operational": 1,
00:11:45.478 "base_bdevs_list": [
00:11:45.478 {
00:11:45.478 "name": null,
00:11:45.478 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:45.478 "is_configured": false,
00:11:45.478 "data_offset": 0,
00:11:45.478 "data_size": 63488
00:11:45.478 },
00:11:45.478 {
00:11:45.478 "name": "BaseBdev2",
00:11:45.478 "uuid": "bceb92b1-4cee-5e11-ac28-1ef1827d6b23",
00:11:45.478 "is_configured": true,
00:11:45.478 "data_offset": 2048,
00:11:45.478 "data_size": 63488
00:11:45.478 }
00:11:45.478 ]
00:11:45.478 }'
00:11:45.478 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:45.478 01:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:46.048 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:11:46.048 01:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.048 01:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:46.048 [2024-10-09 01:31:44.698341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:11:46.048 [2024-10-09 01:31:44.698425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:46.048 [2024-10-09 01:31:44.698448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:11:46.048 [2024-10-09 01:31:44.698459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:46.048 [2024-10-09 01:31:44.698994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:46.048 [2024-10-09 01:31:44.699024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:11:46.048 [2024-10-09 01:31:44.699117] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:11:46.048 [2024-10-09 01:31:44.699153] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:11:46.048 [2024-10-09 01:31:44.699163] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:11:46.048 [2024-10-09 01:31:44.699199] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:11:46.048 [2024-10-09 01:31:44.706181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50
spare
00:11:46.048 01:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.048 01:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1
00:11:46.048 [2024-10-09 01:31:44.708327] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:11:46.988 01:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:11:46.988 01:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:11:46.988 01:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:11:46.988 01:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:11:46.988 01:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:11:46.988 01:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:46.988 01:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.988 01:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:46.988 01:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:46.988 01:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:46.988 01:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:11:46.988 "name": "raid_bdev1",
00:11:46.988 "uuid": "675f093c-f0f3-43fb-832c-7036072c8f4c",
00:11:46.988 "strip_size_kb": 0,
00:11:46.988 "state": "online",
00:11:46.988 "raid_level": "raid1",
00:11:46.988 "superblock": true,
00:11:46.988 "num_base_bdevs": 2,
00:11:46.988 "num_base_bdevs_discovered": 2,
00:11:46.988 "num_base_bdevs_operational": 2,
00:11:46.988 "process": {
00:11:46.988 "type": "rebuild",
00:11:46.988 "target": "spare",
00:11:46.988 "progress": {
00:11:46.988 "blocks": 20480,
00:11:46.988 "percent": 32
00:11:46.988 }
00:11:46.988 },
00:11:46.988 "base_bdevs_list": [
00:11:46.988 {
00:11:46.988 "name": "spare",
00:11:46.988 "uuid": "2f46a811-8ce3-5cb5-9f1d-ca201568737c",
00:11:46.988 "is_configured": true,
00:11:46.988 "data_offset": 2048,
00:11:46.988 "data_size": 63488
00:11:46.988 },
00:11:46.988 {
00:11:46.988 "name": "BaseBdev2",
00:11:46.988 "uuid": "bceb92b1-4cee-5e11-ac28-1ef1827d6b23",
00:11:46.988 "is_configured": true,
00:11:46.988 "data_offset": 2048,
00:11:46.988 "data_size": 63488
00:11:46.988 }
00:11:46.988 ]
00:11:46.988 }'
00:11:46.988 01:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:11:46.988 01:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:11:46.988 01:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:11:46.988 01:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:11:46.988 01:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:11:46.988 01:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:46.988 01:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:46.988 [2024-10-09 01:31:45.853401] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:47.247 [2024-10-09 01:31:45.918068] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:11:47.247 [2024-10-09 01:31:45.918139] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:47.247 [2024-10-09 01:31:45.918157] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:11:47.247 [2024-10-09 01:31:45.918165] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:11:47.247 01:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:47.247 01:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:11:47.247 01:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:47.247 01:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:47.247 01:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:47.247 01:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:47.247 01:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:11:47.247 01:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:47.247 01:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:47.247 01:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:47.248 01:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:47.248 01:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:47.248 01:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:47.248 01:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:47.248 01:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:47.248 01:31:45
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.248 01:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.248 "name": "raid_bdev1", 00:11:47.248 "uuid": "675f093c-f0f3-43fb-832c-7036072c8f4c", 00:11:47.248 "strip_size_kb": 0, 00:11:47.248 "state": "online", 00:11:47.248 "raid_level": "raid1", 00:11:47.248 "superblock": true, 00:11:47.248 "num_base_bdevs": 2, 00:11:47.248 "num_base_bdevs_discovered": 1, 00:11:47.248 "num_base_bdevs_operational": 1, 00:11:47.248 "base_bdevs_list": [ 00:11:47.248 { 00:11:47.248 "name": null, 00:11:47.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.248 "is_configured": false, 00:11:47.248 "data_offset": 0, 00:11:47.248 "data_size": 63488 00:11:47.248 }, 00:11:47.248 { 00:11:47.248 "name": "BaseBdev2", 00:11:47.248 "uuid": "bceb92b1-4cee-5e11-ac28-1ef1827d6b23", 00:11:47.248 "is_configured": true, 00:11:47.248 "data_offset": 2048, 00:11:47.248 "data_size": 63488 00:11:47.248 } 00:11:47.248 ] 00:11:47.248 }' 00:11:47.248 01:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.248 01:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.507 01:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:47.507 01:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:47.507 01:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:47.507 01:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:47.507 01:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:47.507 01:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.507 01:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.507 01:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.507 01:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.507 01:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.767 01:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:47.767 "name": "raid_bdev1", 00:11:47.767 "uuid": "675f093c-f0f3-43fb-832c-7036072c8f4c", 00:11:47.767 "strip_size_kb": 0, 00:11:47.767 "state": "online", 00:11:47.767 "raid_level": "raid1", 00:11:47.767 "superblock": true, 00:11:47.767 "num_base_bdevs": 2, 00:11:47.767 "num_base_bdevs_discovered": 1, 00:11:47.767 "num_base_bdevs_operational": 1, 00:11:47.767 "base_bdevs_list": [ 00:11:47.767 { 00:11:47.767 "name": null, 00:11:47.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.767 "is_configured": false, 00:11:47.767 "data_offset": 0, 00:11:47.767 "data_size": 63488 00:11:47.767 }, 00:11:47.767 { 00:11:47.767 "name": "BaseBdev2", 00:11:47.767 "uuid": "bceb92b1-4cee-5e11-ac28-1ef1827d6b23", 00:11:47.767 "is_configured": true, 00:11:47.767 "data_offset": 2048, 00:11:47.767 "data_size": 63488 00:11:47.767 } 00:11:47.767 ] 00:11:47.767 }' 00:11:47.767 01:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:47.767 01:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:47.767 01:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:47.767 01:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:47.767 01:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:47.767 01:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:47.767 01:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.767 01:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.767 01:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:47.767 01:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.767 01:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.767 [2024-10-09 01:31:46.513837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:47.767 [2024-10-09 01:31:46.513898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.767 [2024-10-09 01:31:46.513923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:47.767 [2024-10-09 01:31:46.513933] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.767 [2024-10-09 01:31:46.514380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.767 [2024-10-09 01:31:46.514396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:47.767 [2024-10-09 01:31:46.514477] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:47.767 [2024-10-09 01:31:46.514492] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:47.767 [2024-10-09 01:31:46.514505] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:47.767 [2024-10-09 01:31:46.514534] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:47.767 BaseBdev1 00:11:47.767 01:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:47.767 01:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:48.705 01:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:48.705 01:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.705 01:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.705 01:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.705 01:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.705 01:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:48.705 01:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.705 01:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.705 01:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.705 01:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.705 01:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.705 01:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.705 01:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.705 01:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.705 01:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.705 01:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.705 "name": "raid_bdev1", 00:11:48.705 "uuid": "675f093c-f0f3-43fb-832c-7036072c8f4c", 00:11:48.705 "strip_size_kb": 0, 
00:11:48.705 "state": "online", 00:11:48.705 "raid_level": "raid1", 00:11:48.705 "superblock": true, 00:11:48.705 "num_base_bdevs": 2, 00:11:48.705 "num_base_bdevs_discovered": 1, 00:11:48.705 "num_base_bdevs_operational": 1, 00:11:48.705 "base_bdevs_list": [ 00:11:48.705 { 00:11:48.705 "name": null, 00:11:48.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.705 "is_configured": false, 00:11:48.705 "data_offset": 0, 00:11:48.705 "data_size": 63488 00:11:48.705 }, 00:11:48.705 { 00:11:48.705 "name": "BaseBdev2", 00:11:48.705 "uuid": "bceb92b1-4cee-5e11-ac28-1ef1827d6b23", 00:11:48.705 "is_configured": true, 00:11:48.705 "data_offset": 2048, 00:11:48.705 "data_size": 63488 00:11:48.705 } 00:11:48.705 ] 00:11:48.705 }' 00:11:48.705 01:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.705 01:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.274 01:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:49.275 01:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:49.275 01:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:49.275 01:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:49.275 01:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:49.275 01:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.275 01:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.275 01:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.275 01:31:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.275 01:31:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.275 01:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:49.275 "name": "raid_bdev1", 00:11:49.275 "uuid": "675f093c-f0f3-43fb-832c-7036072c8f4c", 00:11:49.275 "strip_size_kb": 0, 00:11:49.275 "state": "online", 00:11:49.275 "raid_level": "raid1", 00:11:49.275 "superblock": true, 00:11:49.275 "num_base_bdevs": 2, 00:11:49.275 "num_base_bdevs_discovered": 1, 00:11:49.275 "num_base_bdevs_operational": 1, 00:11:49.275 "base_bdevs_list": [ 00:11:49.275 { 00:11:49.275 "name": null, 00:11:49.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.275 "is_configured": false, 00:11:49.275 "data_offset": 0, 00:11:49.275 "data_size": 63488 00:11:49.275 }, 00:11:49.275 { 00:11:49.275 "name": "BaseBdev2", 00:11:49.275 "uuid": "bceb92b1-4cee-5e11-ac28-1ef1827d6b23", 00:11:49.275 "is_configured": true, 00:11:49.275 "data_offset": 2048, 00:11:49.275 "data_size": 63488 00:11:49.275 } 00:11:49.275 ] 00:11:49.275 }' 00:11:49.275 01:31:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:49.275 01:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:49.275 01:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:49.275 01:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:49.275 01:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:49.275 01:31:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:11:49.275 01:31:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:49.275 01:31:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:49.275 01:31:48 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:49.275 01:31:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:49.275 01:31:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:49.275 01:31:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:49.275 01:31:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.275 01:31:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.275 [2024-10-09 01:31:48.086298] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:49.275 [2024-10-09 01:31:48.086568] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:49.275 [2024-10-09 01:31:48.086626] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:49.275 request: 00:11:49.275 { 00:11:49.275 "base_bdev": "BaseBdev1", 00:11:49.275 "raid_bdev": "raid_bdev1", 00:11:49.275 "method": "bdev_raid_add_base_bdev", 00:11:49.275 "req_id": 1 00:11:49.275 } 00:11:49.275 Got JSON-RPC error response 00:11:49.275 response: 00:11:49.275 { 00:11:49.275 "code": -22, 00:11:49.275 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:49.275 } 00:11:49.275 01:31:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:49.275 01:31:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:11:49.275 01:31:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:49.275 01:31:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:49.275 01:31:48 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:49.275 01:31:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:50.214 01:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:50.214 01:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.214 01:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.214 01:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.214 01:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.214 01:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:50.214 01:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.214 01:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.214 01:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.214 01:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.473 01:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.473 01:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.473 01:31:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.473 01:31:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.473 01:31:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.473 01:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.473 "name": "raid_bdev1", 00:11:50.473 "uuid": "675f093c-f0f3-43fb-832c-7036072c8f4c", 
00:11:50.474 "strip_size_kb": 0, 00:11:50.474 "state": "online", 00:11:50.474 "raid_level": "raid1", 00:11:50.474 "superblock": true, 00:11:50.474 "num_base_bdevs": 2, 00:11:50.474 "num_base_bdevs_discovered": 1, 00:11:50.474 "num_base_bdevs_operational": 1, 00:11:50.474 "base_bdevs_list": [ 00:11:50.474 { 00:11:50.474 "name": null, 00:11:50.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.474 "is_configured": false, 00:11:50.474 "data_offset": 0, 00:11:50.474 "data_size": 63488 00:11:50.474 }, 00:11:50.474 { 00:11:50.474 "name": "BaseBdev2", 00:11:50.474 "uuid": "bceb92b1-4cee-5e11-ac28-1ef1827d6b23", 00:11:50.474 "is_configured": true, 00:11:50.474 "data_offset": 2048, 00:11:50.474 "data_size": 63488 00:11:50.474 } 00:11:50.474 ] 00:11:50.474 }' 00:11:50.474 01:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.474 01:31:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.733 01:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:50.733 01:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:50.733 01:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:50.733 01:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:50.733 01:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:50.733 01:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.733 01:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.733 01:31:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.733 01:31:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.733 01:31:49 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.993 01:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:50.993 "name": "raid_bdev1", 00:11:50.993 "uuid": "675f093c-f0f3-43fb-832c-7036072c8f4c", 00:11:50.993 "strip_size_kb": 0, 00:11:50.993 "state": "online", 00:11:50.993 "raid_level": "raid1", 00:11:50.993 "superblock": true, 00:11:50.993 "num_base_bdevs": 2, 00:11:50.993 "num_base_bdevs_discovered": 1, 00:11:50.993 "num_base_bdevs_operational": 1, 00:11:50.993 "base_bdevs_list": [ 00:11:50.993 { 00:11:50.993 "name": null, 00:11:50.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.993 "is_configured": false, 00:11:50.993 "data_offset": 0, 00:11:50.993 "data_size": 63488 00:11:50.993 }, 00:11:50.993 { 00:11:50.993 "name": "BaseBdev2", 00:11:50.993 "uuid": "bceb92b1-4cee-5e11-ac28-1ef1827d6b23", 00:11:50.993 "is_configured": true, 00:11:50.993 "data_offset": 2048, 00:11:50.993 "data_size": 63488 00:11:50.993 } 00:11:50.993 ] 00:11:50.993 }' 00:11:50.993 01:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:50.993 01:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:50.993 01:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:50.993 01:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:50.993 01:31:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 87558 00:11:50.993 01:31:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 87558 ']' 00:11:50.993 01:31:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 87558 00:11:50.993 01:31:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:50.993 01:31:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:11:50.993 01:31:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87558 00:11:50.993 killing process with pid 87558 00:11:50.993 Received shutdown signal, test time was about 60.000000 seconds 00:11:50.993 00:11:50.993 Latency(us) 00:11:50.993 [2024-10-09T01:31:49.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:50.993 [2024-10-09T01:31:49.886Z] =================================================================================================================== 00:11:50.993 [2024-10-09T01:31:49.886Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:50.993 01:31:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:50.993 01:31:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:50.993 01:31:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87558' 00:11:50.993 01:31:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 87558 00:11:50.993 [2024-10-09 01:31:49.770438] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:50.993 [2024-10-09 01:31:49.770608] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:50.993 01:31:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 87558 00:11:50.993 [2024-10-09 01:31:49.770675] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:50.993 [2024-10-09 01:31:49.770690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:11:50.993 [2024-10-09 01:31:49.828306] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:51.563 ************************************ 00:11:51.563 END TEST raid_rebuild_test_sb 00:11:51.563 01:31:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:11:51.563 00:11:51.563 real 0m21.601s 00:11:51.563 user 0m26.693s 00:11:51.563 sys 0m3.721s 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.563 ************************************ 00:11:51.563 01:31:50 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:11:51.563 01:31:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:51.563 01:31:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:51.563 01:31:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:51.563 ************************************ 00:11:51.563 START TEST raid_rebuild_test_io 00:11:51.563 ************************************ 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ 
)) 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=88279 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 88279 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 88279 ']' 00:11:51.563 01:31:50 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:51.563 01:31:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:51.563 [2024-10-09 01:31:50.370785] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:11:51.563 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:51.563 Zero copy mechanism will not be used. 00:11:51.563 [2024-10-09 01:31:50.370995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88279 ] 00:11:51.823 [2024-10-09 01:31:50.507584] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:51.823 [2024-10-09 01:31:50.536947] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.823 [2024-10-09 01:31:50.609008] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.823 [2024-10-09 01:31:50.685204] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.823 [2024-10-09 01:31:50.685251] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:52.393 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:52.393 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:11:52.393 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:52.393 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:52.393 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.393 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:52.393 BaseBdev1_malloc 00:11:52.393 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.393 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:52.393 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.393 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:52.393 [2024-10-09 01:31:51.220199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:52.393 [2024-10-09 01:31:51.220275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.393 [2024-10-09 01:31:51.220306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:52.393 [2024-10-09 
01:31:51.220323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.393 [2024-10-09 01:31:51.222778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.393 [2024-10-09 01:31:51.222816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:52.393 BaseBdev1 00:11:52.393 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.393 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:52.393 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:52.393 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.393 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:52.393 BaseBdev2_malloc 00:11:52.393 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.393 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:52.393 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.393 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:52.393 [2024-10-09 01:31:51.269829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:52.393 [2024-10-09 01:31:51.269936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.393 [2024-10-09 01:31:51.269976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:52.393 [2024-10-09 01:31:51.270001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.393 [2024-10-09 01:31:51.274521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:11:52.393 [2024-10-09 01:31:51.274588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:52.393 BaseBdev2 00:11:52.393 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.393 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:52.393 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.393 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:52.652 spare_malloc 00:11:52.652 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.652 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:52.652 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.652 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:52.652 spare_delay 00:11:52.652 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.652 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:52.652 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.652 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:52.652 [2024-10-09 01:31:51.317722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:52.652 [2024-10-09 01:31:51.317864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.652 [2024-10-09 01:31:51.317899] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:52.652 [2024-10-09 01:31:51.317910] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.652 [2024-10-09 01:31:51.320285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.652 [2024-10-09 01:31:51.320325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:52.652 spare 00:11:52.652 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.652 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:52.652 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.652 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:52.652 [2024-10-09 01:31:51.329774] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.652 [2024-10-09 01:31:51.331910] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:52.652 [2024-10-09 01:31:51.331993] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:52.653 [2024-10-09 01:31:51.332005] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:52.653 [2024-10-09 01:31:51.332272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:52.653 [2024-10-09 01:31:51.332400] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:52.653 [2024-10-09 01:31:51.332410] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:52.653 [2024-10-09 01:31:51.332593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.653 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.653 01:31:51 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:52.653 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.653 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.653 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.653 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.653 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:52.653 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.653 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.653 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.653 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.653 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.653 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.653 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.653 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:52.653 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.653 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.653 "name": "raid_bdev1", 00:11:52.653 "uuid": "c3e2013c-1f43-47e0-8fee-48748ed4aa6b", 00:11:52.653 "strip_size_kb": 0, 00:11:52.653 "state": "online", 00:11:52.653 "raid_level": "raid1", 00:11:52.653 "superblock": false, 00:11:52.653 "num_base_bdevs": 2, 00:11:52.653 
"num_base_bdevs_discovered": 2, 00:11:52.653 "num_base_bdevs_operational": 2, 00:11:52.653 "base_bdevs_list": [ 00:11:52.653 { 00:11:52.653 "name": "BaseBdev1", 00:11:52.653 "uuid": "ce7736c9-3bd7-55c8-b59a-73ce57b1cc97", 00:11:52.653 "is_configured": true, 00:11:52.653 "data_offset": 0, 00:11:52.653 "data_size": 65536 00:11:52.653 }, 00:11:52.653 { 00:11:52.653 "name": "BaseBdev2", 00:11:52.653 "uuid": "c3b29c64-0810-53ba-992c-87b128fd149f", 00:11:52.653 "is_configured": true, 00:11:52.653 "data_offset": 0, 00:11:52.653 "data_size": 65536 00:11:52.653 } 00:11:52.653 ] 00:11:52.653 }' 00:11:52.653 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.653 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:52.912 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:52.912 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.912 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:52.912 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:53.172 [2024-10-09 01:31:51.806175] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
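The `verify_raid_bdev_state` calls traced above select one record out of `rpc_cmd bdev_raid_get_bdevs all` with `jq` and compare its fields against the expected state. A condensed stand-alone sketch of those checks, run against a canned response trimmed from the JSON dump above (requires `jq`; variable names loosely mirror the helper's locals):

```shell
# Canned bdev_raid_get_bdevs record, trimmed from the log above.
raid_bdev_info='{
  "name": "raid_bdev1",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}'

# Field extraction, as the helper does with jq -r.
state=$(printf '%s' "$raid_bdev_info" | jq -r '.state')
raid_level=$(printf '%s' "$raid_bdev_info" | jq -r '.raid_level')
num_discovered=$(printf '%s' "$raid_bdev_info" | jq -r '.num_base_bdevs_discovered')

# Compare against the expected online raid1 state with both base bdevs present.
[ "$state" = online ] && [ "$raid_level" = raid1 ] && [ "$num_discovered" -eq 2 ] \
    && echo "raid_bdev1 verified: $state $raid_level ($num_discovered discovered)"
```

After the `bdev_raid_remove_base_bdev BaseBdev1` step further down, the same checks run with `num_base_bdevs_discovered` and `num_base_bdevs_operational` expected to be 1, the removed slot reported as `"name": null` with an all-zero uuid.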
00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.172 [2024-10-09 01:31:51.901904] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.172 "name": "raid_bdev1", 00:11:53.172 "uuid": "c3e2013c-1f43-47e0-8fee-48748ed4aa6b", 00:11:53.172 "strip_size_kb": 0, 00:11:53.172 "state": "online", 00:11:53.172 "raid_level": "raid1", 00:11:53.172 "superblock": false, 00:11:53.172 "num_base_bdevs": 2, 00:11:53.172 "num_base_bdevs_discovered": 1, 00:11:53.172 "num_base_bdevs_operational": 1, 00:11:53.172 "base_bdevs_list": [ 00:11:53.172 { 00:11:53.172 "name": null, 00:11:53.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.172 "is_configured": false, 00:11:53.172 "data_offset": 0, 00:11:53.172 "data_size": 65536 00:11:53.172 }, 00:11:53.172 { 00:11:53.172 "name": "BaseBdev2", 00:11:53.172 "uuid": "c3b29c64-0810-53ba-992c-87b128fd149f", 00:11:53.172 "is_configured": true, 00:11:53.172 "data_offset": 0, 00:11:53.172 "data_size": 65536 00:11:53.172 } 00:11:53.172 ] 00:11:53.172 }' 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.172 01:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.172 [2024-10-09 01:31:51.985319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:11:53.172 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:11:53.172 Zero copy mechanism will not be used. 00:11:53.172 Running I/O for 60 seconds... 00:11:53.741 01:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:53.741 01:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.741 01:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:53.741 [2024-10-09 01:31:52.374816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:53.741 01:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.741 01:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:53.741 [2024-10-09 01:31:52.416595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:53.741 [2024-10-09 01:31:52.418952] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:53.741 [2024-10-09 01:31:52.532650] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:53.741 [2024-10-09 01:31:52.533348] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:54.001 [2024-10-09 01:31:52.748018] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:54.001 [2024-10-09 01:31:52.748471] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:54.261 174.00 IOPS, 522.00 MiB/s [2024-10-09T01:31:53.154Z] [2024-10-09 01:31:53.100658] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:54.261 [2024-10-09 01:31:53.101410] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:54.520 [2024-10-09 01:31:53.349346] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:54.520 01:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:54.520 01:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:54.520 01:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:54.520 01:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:54.520 01:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:54.520 01:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.520 01:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.520 01:31:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.780 01:31:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.780 01:31:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.780 01:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:54.780 "name": "raid_bdev1", 00:11:54.780 "uuid": "c3e2013c-1f43-47e0-8fee-48748ed4aa6b", 00:11:54.780 "strip_size_kb": 0, 00:11:54.780 "state": "online", 00:11:54.780 "raid_level": "raid1", 00:11:54.780 "superblock": false, 00:11:54.780 "num_base_bdevs": 2, 00:11:54.780 "num_base_bdevs_discovered": 2, 00:11:54.780 "num_base_bdevs_operational": 2, 00:11:54.780 "process": { 00:11:54.780 "type": "rebuild", 00:11:54.780 "target": "spare", 00:11:54.780 "progress": { 00:11:54.780 "blocks": 10240, 00:11:54.780 "percent": 15 00:11:54.780 } 00:11:54.780 }, 
00:11:54.780 "base_bdevs_list": [ 00:11:54.780 { 00:11:54.780 "name": "spare", 00:11:54.780 "uuid": "949eddea-9517-5ad4-aed9-18b39e5a98ae", 00:11:54.780 "is_configured": true, 00:11:54.780 "data_offset": 0, 00:11:54.780 "data_size": 65536 00:11:54.780 }, 00:11:54.780 { 00:11:54.780 "name": "BaseBdev2", 00:11:54.780 "uuid": "c3b29c64-0810-53ba-992c-87b128fd149f", 00:11:54.780 "is_configured": true, 00:11:54.780 "data_offset": 0, 00:11:54.780 "data_size": 65536 00:11:54.780 } 00:11:54.780 ] 00:11:54.780 }' 00:11:54.780 01:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:54.780 01:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:54.780 01:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:54.780 01:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:54.780 01:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:54.780 01:31:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.780 01:31:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:54.780 [2024-10-09 01:31:53.539458] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:54.781 [2024-10-09 01:31:53.576437] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:54.781 [2024-10-09 01:31:53.616743] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:54.781 [2024-10-09 01:31:53.631073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.781 [2024-10-09 01:31:53.631151] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:54.781 [2024-10-09 01:31:53.631196] 
bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:54.781 [2024-10-09 01:31:53.646764] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006150 00:11:54.781 01:31:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.781 01:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:54.781 01:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.781 01:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.781 01:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.781 01:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.781 01:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:54.781 01:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.781 01:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.781 01:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.781 01:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.781 01:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.781 01:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.781 01:31:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.781 01:31:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.040 01:31:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:11:55.040 01:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.040 "name": "raid_bdev1", 00:11:55.040 "uuid": "c3e2013c-1f43-47e0-8fee-48748ed4aa6b", 00:11:55.040 "strip_size_kb": 0, 00:11:55.040 "state": "online", 00:11:55.040 "raid_level": "raid1", 00:11:55.040 "superblock": false, 00:11:55.040 "num_base_bdevs": 2, 00:11:55.040 "num_base_bdevs_discovered": 1, 00:11:55.040 "num_base_bdevs_operational": 1, 00:11:55.040 "base_bdevs_list": [ 00:11:55.040 { 00:11:55.040 "name": null, 00:11:55.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.040 "is_configured": false, 00:11:55.040 "data_offset": 0, 00:11:55.040 "data_size": 65536 00:11:55.040 }, 00:11:55.040 { 00:11:55.040 "name": "BaseBdev2", 00:11:55.040 "uuid": "c3b29c64-0810-53ba-992c-87b128fd149f", 00:11:55.040 "is_configured": true, 00:11:55.040 "data_offset": 0, 00:11:55.040 "data_size": 65536 00:11:55.040 } 00:11:55.040 ] 00:11:55.040 }' 00:11:55.040 01:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.040 01:31:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.300 172.00 IOPS, 516.00 MiB/s [2024-10-09T01:31:54.193Z] 01:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:55.300 01:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:55.300 01:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:55.300 01:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:55.300 01:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:55.300 01:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.300 01:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:55.300 01:31:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.300 01:31:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.300 01:31:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.300 01:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:55.300 "name": "raid_bdev1", 00:11:55.300 "uuid": "c3e2013c-1f43-47e0-8fee-48748ed4aa6b", 00:11:55.300 "strip_size_kb": 0, 00:11:55.300 "state": "online", 00:11:55.300 "raid_level": "raid1", 00:11:55.300 "superblock": false, 00:11:55.300 "num_base_bdevs": 2, 00:11:55.300 "num_base_bdevs_discovered": 1, 00:11:55.300 "num_base_bdevs_operational": 1, 00:11:55.300 "base_bdevs_list": [ 00:11:55.300 { 00:11:55.300 "name": null, 00:11:55.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.300 "is_configured": false, 00:11:55.300 "data_offset": 0, 00:11:55.300 "data_size": 65536 00:11:55.300 }, 00:11:55.300 { 00:11:55.300 "name": "BaseBdev2", 00:11:55.300 "uuid": "c3b29c64-0810-53ba-992c-87b128fd149f", 00:11:55.300 "is_configured": true, 00:11:55.300 "data_offset": 0, 00:11:55.300 "data_size": 65536 00:11:55.300 } 00:11:55.300 ] 00:11:55.300 }' 00:11:55.300 01:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:55.560 01:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:55.560 01:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:55.560 01:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:55.560 01:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:55.560 01:31:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.560 01:31:54 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:55.560 [2024-10-09 01:31:54.252888] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:55.560 01:31:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.560 01:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:55.560 [2024-10-09 01:31:54.295121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:11:55.560 [2024-10-09 01:31:54.297448] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:55.560 [2024-10-09 01:31:54.404706] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:55.560 [2024-10-09 01:31:54.405268] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:55.820 [2024-10-09 01:31:54.522010] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:55.820 [2024-10-09 01:31:54.522366] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:56.079 [2024-10-09 01:31:54.860822] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:56.338 181.67 IOPS, 545.00 MiB/s [2024-10-09T01:31:55.231Z] [2024-10-09 01:31:55.088037] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:56.338 [2024-10-09 01:31:55.088443] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:56.598 01:31:55 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.598 [2024-10-09 01:31:55.320698] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:56.598 "name": "raid_bdev1", 00:11:56.598 "uuid": "c3e2013c-1f43-47e0-8fee-48748ed4aa6b", 00:11:56.598 "strip_size_kb": 0, 00:11:56.598 "state": "online", 00:11:56.598 "raid_level": "raid1", 00:11:56.598 "superblock": false, 00:11:56.598 "num_base_bdevs": 2, 00:11:56.598 "num_base_bdevs_discovered": 2, 00:11:56.598 "num_base_bdevs_operational": 2, 00:11:56.598 "process": { 00:11:56.598 "type": "rebuild", 00:11:56.598 "target": "spare", 00:11:56.598 "progress": { 00:11:56.598 "blocks": 12288, 00:11:56.598 "percent": 18 00:11:56.598 } 00:11:56.598 }, 00:11:56.598 "base_bdevs_list": [ 00:11:56.598 { 00:11:56.598 "name": "spare", 00:11:56.598 "uuid": "949eddea-9517-5ad4-aed9-18b39e5a98ae", 00:11:56.598 "is_configured": true, 00:11:56.598 "data_offset": 0, 
00:11:56.598 "data_size": 65536 00:11:56.598 }, 00:11:56.598 { 00:11:56.598 "name": "BaseBdev2", 00:11:56.598 "uuid": "c3b29c64-0810-53ba-992c-87b128fd149f", 00:11:56.598 "is_configured": true, 00:11:56.598 "data_offset": 0, 00:11:56.598 "data_size": 65536 00:11:56.598 } 00:11:56.598 ] 00:11:56.598 }' 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:56.598 [2024-10-09 01:31:55.427480] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:56.598 [2024-10-09 01:31:55.427949] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=331 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:56.598 01:31:55 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.598 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:56.598 "name": "raid_bdev1", 00:11:56.598 "uuid": "c3e2013c-1f43-47e0-8fee-48748ed4aa6b", 00:11:56.598 "strip_size_kb": 0, 00:11:56.598 "state": "online", 00:11:56.598 "raid_level": "raid1", 00:11:56.598 "superblock": false, 00:11:56.598 "num_base_bdevs": 2, 00:11:56.598 "num_base_bdevs_discovered": 2, 00:11:56.598 "num_base_bdevs_operational": 2, 00:11:56.598 "process": { 00:11:56.598 "type": "rebuild", 00:11:56.598 "target": "spare", 00:11:56.598 "progress": { 00:11:56.598 "blocks": 16384, 00:11:56.598 "percent": 25 00:11:56.598 } 00:11:56.598 }, 00:11:56.598 "base_bdevs_list": [ 00:11:56.598 { 00:11:56.598 "name": "spare", 00:11:56.598 "uuid": "949eddea-9517-5ad4-aed9-18b39e5a98ae", 00:11:56.598 "is_configured": true, 00:11:56.598 "data_offset": 0, 00:11:56.598 "data_size": 65536 00:11:56.598 }, 00:11:56.598 { 00:11:56.598 "name": "BaseBdev2", 00:11:56.598 "uuid": "c3b29c64-0810-53ba-992c-87b128fd149f", 00:11:56.598 "is_configured": true, 00:11:56.598 "data_offset": 0, 00:11:56.598 "data_size": 65536 00:11:56.598 } 00:11:56.598 ] 
00:11:56.598 }' 00:11:56.858 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:56.858 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:56.858 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:56.858 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:56.858 01:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:56.858 [2024-10-09 01:31:55.671827] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:57.117 [2024-10-09 01:31:55.788858] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:57.376 155.00 IOPS, 465.00 MiB/s [2024-10-09T01:31:56.269Z] [2024-10-09 01:31:56.234339] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:57.376 [2024-10-09 01:31:56.234754] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:57.969 [2024-10-09 01:31:56.554965] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:11:57.969 01:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:57.969 01:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:57.969 01:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:57.969 01:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:57.969 01:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:11:57.969 01:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:57.969 01:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.969 01:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.969 01:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.969 01:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:57.969 01:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.969 01:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:57.969 "name": "raid_bdev1", 00:11:57.969 "uuid": "c3e2013c-1f43-47e0-8fee-48748ed4aa6b", 00:11:57.969 "strip_size_kb": 0, 00:11:57.969 "state": "online", 00:11:57.969 "raid_level": "raid1", 00:11:57.969 "superblock": false, 00:11:57.969 "num_base_bdevs": 2, 00:11:57.969 "num_base_bdevs_discovered": 2, 00:11:57.969 "num_base_bdevs_operational": 2, 00:11:57.969 "process": { 00:11:57.969 "type": "rebuild", 00:11:57.969 "target": "spare", 00:11:57.969 "progress": { 00:11:57.969 "blocks": 32768, 00:11:57.969 "percent": 50 00:11:57.969 } 00:11:57.969 }, 00:11:57.969 "base_bdevs_list": [ 00:11:57.969 { 00:11:57.969 "name": "spare", 00:11:57.969 "uuid": "949eddea-9517-5ad4-aed9-18b39e5a98ae", 00:11:57.969 "is_configured": true, 00:11:57.969 "data_offset": 0, 00:11:57.969 "data_size": 65536 00:11:57.969 }, 00:11:57.969 { 00:11:57.969 "name": "BaseBdev2", 00:11:57.969 "uuid": "c3b29c64-0810-53ba-992c-87b128fd149f", 00:11:57.969 "is_configured": true, 00:11:57.969 "data_offset": 0, 00:11:57.969 "data_size": 65536 00:11:57.969 } 00:11:57.969 ] 00:11:57.969 }' 00:11:57.969 01:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:57.969 01:31:56 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:57.969 01:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:57.969 01:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:57.969 01:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:58.229 [2024-10-09 01:31:56.902979] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:11:58.488 134.60 IOPS, 403.80 MiB/s [2024-10-09T01:31:57.381Z] [2024-10-09 01:31:57.123922] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:11:58.747 [2024-10-09 01:31:57.472244] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:11:59.006 01:31:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:59.006 01:31:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:59.006 01:31:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:59.006 01:31:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:59.006 01:31:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:59.006 01:31:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:59.006 01:31:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.006 01:31:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.006 01:31:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.006 01:31:57 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:59.006 01:31:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.006 01:31:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:59.006 "name": "raid_bdev1", 00:11:59.006 "uuid": "c3e2013c-1f43-47e0-8fee-48748ed4aa6b", 00:11:59.006 "strip_size_kb": 0, 00:11:59.006 "state": "online", 00:11:59.006 "raid_level": "raid1", 00:11:59.006 "superblock": false, 00:11:59.006 "num_base_bdevs": 2, 00:11:59.006 "num_base_bdevs_discovered": 2, 00:11:59.006 "num_base_bdevs_operational": 2, 00:11:59.006 "process": { 00:11:59.006 "type": "rebuild", 00:11:59.006 "target": "spare", 00:11:59.006 "progress": { 00:11:59.006 "blocks": 49152, 00:11:59.006 "percent": 75 00:11:59.006 } 00:11:59.006 }, 00:11:59.006 "base_bdevs_list": [ 00:11:59.006 { 00:11:59.006 "name": "spare", 00:11:59.006 "uuid": "949eddea-9517-5ad4-aed9-18b39e5a98ae", 00:11:59.006 "is_configured": true, 00:11:59.006 "data_offset": 0, 00:11:59.006 "data_size": 65536 00:11:59.006 }, 00:11:59.006 { 00:11:59.006 "name": "BaseBdev2", 00:11:59.006 "uuid": "c3b29c64-0810-53ba-992c-87b128fd149f", 00:11:59.006 "is_configured": true, 00:11:59.006 "data_offset": 0, 00:11:59.006 "data_size": 65536 00:11:59.006 } 00:11:59.006 ] 00:11:59.006 }' 00:11:59.006 01:31:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:59.006 01:31:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:59.006 01:31:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:59.006 01:31:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:59.006 01:31:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:59.265 118.67 IOPS, 356.00 MiB/s [2024-10-09T01:31:58.158Z] [2024-10-09 01:31:58.147016] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:11:59.524 [2024-10-09 01:31:58.257322] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:11:59.784 [2024-10-09 01:31:58.588936] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:00.043 [2024-10-09 01:31:58.688891] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:00.043 [2024-10-09 01:31:58.691409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.043 01:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:00.043 01:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:00.043 01:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:00.043 01:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:00.043 01:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:00.043 01:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:00.043 01:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.043 01:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.043 01:31:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.043 01:31:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.043 01:31:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.043 01:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:12:00.043 "name": "raid_bdev1", 00:12:00.043 "uuid": "c3e2013c-1f43-47e0-8fee-48748ed4aa6b", 00:12:00.043 "strip_size_kb": 0, 00:12:00.043 "state": "online", 00:12:00.043 "raid_level": "raid1", 00:12:00.043 "superblock": false, 00:12:00.043 "num_base_bdevs": 2, 00:12:00.043 "num_base_bdevs_discovered": 2, 00:12:00.043 "num_base_bdevs_operational": 2, 00:12:00.043 "base_bdevs_list": [ 00:12:00.043 { 00:12:00.043 "name": "spare", 00:12:00.043 "uuid": "949eddea-9517-5ad4-aed9-18b39e5a98ae", 00:12:00.043 "is_configured": true, 00:12:00.043 "data_offset": 0, 00:12:00.043 "data_size": 65536 00:12:00.043 }, 00:12:00.043 { 00:12:00.043 "name": "BaseBdev2", 00:12:00.043 "uuid": "c3b29c64-0810-53ba-992c-87b128fd149f", 00:12:00.043 "is_configured": true, 00:12:00.043 "data_offset": 0, 00:12:00.044 "data_size": 65536 00:12:00.044 } 00:12:00.044 ] 00:12:00.044 }' 00:12:00.044 01:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:00.303 01:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:00.303 01:31:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:00.304 01:31:59 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.304 107.86 IOPS, 323.57 MiB/s [2024-10-09T01:31:59.197Z] 01:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:00.304 "name": "raid_bdev1", 00:12:00.304 "uuid": "c3e2013c-1f43-47e0-8fee-48748ed4aa6b", 00:12:00.304 "strip_size_kb": 0, 00:12:00.304 "state": "online", 00:12:00.304 "raid_level": "raid1", 00:12:00.304 "superblock": false, 00:12:00.304 "num_base_bdevs": 2, 00:12:00.304 "num_base_bdevs_discovered": 2, 00:12:00.304 "num_base_bdevs_operational": 2, 00:12:00.304 "base_bdevs_list": [ 00:12:00.304 { 00:12:00.304 "name": "spare", 00:12:00.304 "uuid": "949eddea-9517-5ad4-aed9-18b39e5a98ae", 00:12:00.304 "is_configured": true, 00:12:00.304 "data_offset": 0, 00:12:00.304 "data_size": 65536 00:12:00.304 }, 00:12:00.304 { 00:12:00.304 "name": "BaseBdev2", 00:12:00.304 "uuid": "c3b29c64-0810-53ba-992c-87b128fd149f", 00:12:00.304 "is_configured": true, 00:12:00.304 "data_offset": 0, 00:12:00.304 "data_size": 65536 00:12:00.304 } 00:12:00.304 ] 00:12:00.304 }' 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 
00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.304 01:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.563 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.563 "name": "raid_bdev1", 00:12:00.563 "uuid": "c3e2013c-1f43-47e0-8fee-48748ed4aa6b", 00:12:00.563 "strip_size_kb": 0, 00:12:00.563 "state": "online", 00:12:00.563 "raid_level": "raid1", 00:12:00.563 "superblock": false, 00:12:00.563 
"num_base_bdevs": 2, 00:12:00.563 "num_base_bdevs_discovered": 2, 00:12:00.563 "num_base_bdevs_operational": 2, 00:12:00.563 "base_bdevs_list": [ 00:12:00.563 { 00:12:00.563 "name": "spare", 00:12:00.564 "uuid": "949eddea-9517-5ad4-aed9-18b39e5a98ae", 00:12:00.564 "is_configured": true, 00:12:00.564 "data_offset": 0, 00:12:00.564 "data_size": 65536 00:12:00.564 }, 00:12:00.564 { 00:12:00.564 "name": "BaseBdev2", 00:12:00.564 "uuid": "c3b29c64-0810-53ba-992c-87b128fd149f", 00:12:00.564 "is_configured": true, 00:12:00.564 "data_offset": 0, 00:12:00.564 "data_size": 65536 00:12:00.564 } 00:12:00.564 ] 00:12:00.564 }' 00:12:00.564 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.564 01:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.823 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:00.823 01:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.823 01:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.823 [2024-10-09 01:31:59.610936] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:00.823 [2024-10-09 01:31:59.611035] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.823 00:12:00.823 Latency(us) 00:12:00.823 [2024-10-09T01:31:59.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:00.823 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:00.823 raid_bdev1 : 7.69 102.59 307.78 0.00 0.00 13285.74 274.90 115157.83 00:12:00.823 [2024-10-09T01:31:59.716Z] =================================================================================================================== 00:12:00.823 [2024-10-09T01:31:59.716Z] Total : 102.59 307.78 0.00 0.00 13285.74 274.90 115157.83 00:12:00.823 
[2024-10-09 01:31:59.682850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.823 [2024-10-09 01:31:59.682927] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.823 [2024-10-09 01:31:59.683029] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.823 [2024-10-09 01:31:59.683091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:00.823 { 00:12:00.823 "results": [ 00:12:00.823 { 00:12:00.823 "job": "raid_bdev1", 00:12:00.823 "core_mask": "0x1", 00:12:00.823 "workload": "randrw", 00:12:00.823 "percentage": 50, 00:12:00.823 "status": "finished", 00:12:00.823 "queue_depth": 2, 00:12:00.823 "io_size": 3145728, 00:12:00.823 "runtime": 7.690486, 00:12:00.823 "iops": 102.59429638126902, 00:12:00.823 "mibps": 307.78288914380704, 00:12:00.823 "io_failed": 0, 00:12:00.823 "io_timeout": 0, 00:12:00.823 "avg_latency_us": 13285.743206870111, 00:12:00.823 "min_latency_us": 274.8993288590604, 00:12:00.823 "max_latency_us": 115157.82794386821 00:12:00.823 } 00:12:00.823 ], 00:12:00.823 "core_count": 1 00:12:00.823 } 00:12:00.823 01:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.823 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.823 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:00.823 01:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.823 01:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:00.823 01:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.082 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:01.082 01:31:59 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:01.082 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:01.082 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:01.082 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:01.082 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:01.082 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:01.082 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:01.082 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:01.082 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:01.082 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:01.082 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:01.082 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:01.082 /dev/nbd0 00:12:01.082 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:01.340 01:31:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:01.340 01:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:01.340 01:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:01.340 01:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:01.340 01:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:01.340 01:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- 
# grep -q -w nbd0 /proc/partitions 00:12:01.340 01:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:01.340 01:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:01.340 01:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:01.340 01:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:01.340 1+0 records in 00:12:01.340 1+0 records out 00:12:01.340 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508806 s, 8.1 MB/s 00:12:01.340 01:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.340 01:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:01.341 01:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.341 01:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:01.341 01:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:01.341 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:01.341 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:01.341 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:01.341 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:01.341 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:01.341 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:01.341 01:32:00 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:01.341 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:01.341 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:01.341 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:01.341 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:01.341 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:01.341 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:01.341 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:01.341 /dev/nbd1 00:12:01.600 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:01.600 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:01.600 01:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:01.600 01:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:01.600 01:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:01.600 01:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:01.600 01:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:01.600 01:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:01.600 01:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:01.600 01:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:01.600 01:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:01.600 1+0 records in 00:12:01.600 1+0 records out 00:12:01.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288175 s, 14.2 MB/s 00:12:01.600 01:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.600 01:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:01.600 01:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.600 01:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:01.600 01:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:01.600 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:01.600 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:01.600 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:01.600 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:01.600 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:01.600 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:01.600 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:01.600 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:01.600 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.600 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:01.859 01:32:00 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:01.859 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:01.859 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:01.859 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:01.859 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.859 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:01.859 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:01.859 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.859 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:01.859 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:01.859 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:01.859 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:01.859 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:01.859 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.859 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:02.119 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:02.119 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:02.119 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:02.119 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.119 01:32:00 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.119 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:02.119 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:02.119 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.119 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:02.119 01:32:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 88279 00:12:02.119 01:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 88279 ']' 00:12:02.119 01:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 88279 00:12:02.119 01:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:12:02.119 01:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:02.119 01:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88279 00:12:02.119 01:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:02.119 01:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:02.119 killing process with pid 88279 00:12:02.119 01:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88279' 00:12:02.119 01:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 88279 00:12:02.119 Received shutdown signal, test time was about 8.832419 seconds 00:12:02.119 00:12:02.119 Latency(us) 00:12:02.119 [2024-10-09T01:32:01.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:02.119 [2024-10-09T01:32:01.012Z] 
=================================================================================================================== 00:12:02.119 [2024-10-09T01:32:01.012Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:02.119 [2024-10-09 01:32:00.820864] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:02.119 01:32:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 88279 00:12:02.119 [2024-10-09 01:32:00.868634] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:02.378 01:32:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:02.378 00:12:02.378 real 0m10.970s 00:12:02.378 user 0m14.004s 00:12:02.378 sys 0m1.583s 00:12:02.378 01:32:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:02.378 01:32:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:02.378 ************************************ 00:12:02.378 END TEST raid_rebuild_test_io 00:12:02.378 ************************************ 00:12:02.638 01:32:01 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:02.638 01:32:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:02.638 01:32:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:02.639 01:32:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:02.639 ************************************ 00:12:02.639 START TEST raid_rebuild_test_sb_io 00:12:02.639 ************************************ 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:02.639 01:32:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=88639 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 88639 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 88639 ']' 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:02.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:02.639 01:32:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:02.639 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:02.639 Zero copy mechanism will not be used. 00:12:02.639 [2024-10-09 01:32:01.424161] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:12:02.639 [2024-10-09 01:32:01.424295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88639 ] 00:12:02.899 [2024-10-09 01:32:01.560334] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:02.899 [2024-10-09 01:32:01.588445] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.899 [2024-10-09 01:32:01.662622] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.899 [2024-10-09 01:32:01.738898] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.899 [2024-10-09 01:32:01.738946] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:03.467 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:03.467 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:12:03.467 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:03.467 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:03.467 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.467 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.467 BaseBdev1_malloc 00:12:03.467 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.467 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:03.468 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:03.468 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.468 [2024-10-09 01:32:02.266169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:03.468 [2024-10-09 01:32:02.266263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.468 [2024-10-09 01:32:02.266291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:03.468 [2024-10-09 01:32:02.266310] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.468 [2024-10-09 01:32:02.268717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.468 [2024-10-09 01:32:02.268756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:03.468 BaseBdev1 00:12:03.468 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.468 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:03.468 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:03.468 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.468 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.468 BaseBdev2_malloc 00:12:03.468 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.468 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:03.468 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.468 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.468 [2024-10-09 01:32:02.317767] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:03.468 [2024-10-09 01:32:02.317881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.468 [2024-10-09 01:32:02.317924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:03.468 [2024-10-09 01:32:02.317951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.468 [2024-10-09 01:32:02.322506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.468 [2024-10-09 01:32:02.322576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:03.468 BaseBdev2 00:12:03.468 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.468 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:03.468 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.468 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.468 spare_malloc 00:12:03.468 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.468 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:03.468 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.468 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.468 spare_delay 00:12:03.468 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.468 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:03.468 01:32:02 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.468 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.726 [2024-10-09 01:32:02.365620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:03.726 [2024-10-09 01:32:02.365677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.727 [2024-10-09 01:32:02.365697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:03.727 [2024-10-09 01:32:02.365709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.727 [2024-10-09 01:32:02.368024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.727 [2024-10-09 01:32:02.368058] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:03.727 spare 00:12:03.727 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.727 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:03.727 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.727 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.727 [2024-10-09 01:32:02.377702] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:03.727 [2024-10-09 01:32:02.379752] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:03.727 [2024-10-09 01:32:02.379911] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:03.727 [2024-10-09 01:32:02.379932] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:03.727 [2024-10-09 01:32:02.380194] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:03.727 [2024-10-09 01:32:02.380355] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:03.727 [2024-10-09 01:32:02.380373] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:03.727 [2024-10-09 01:32:02.380488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.727 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.727 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:03.727 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.727 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.727 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.727 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.727 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:03.727 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.727 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.727 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.727 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.727 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.727 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.727 01:32:02 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.727 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.727 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.727 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.727 "name": "raid_bdev1", 00:12:03.727 "uuid": "99221f6d-e27e-455f-be31-94db08d4a322", 00:12:03.727 "strip_size_kb": 0, 00:12:03.727 "state": "online", 00:12:03.727 "raid_level": "raid1", 00:12:03.727 "superblock": true, 00:12:03.727 "num_base_bdevs": 2, 00:12:03.727 "num_base_bdevs_discovered": 2, 00:12:03.727 "num_base_bdevs_operational": 2, 00:12:03.727 "base_bdevs_list": [ 00:12:03.727 { 00:12:03.727 "name": "BaseBdev1", 00:12:03.727 "uuid": "180f980e-400c-59b2-b64a-ab1e21df330e", 00:12:03.727 "is_configured": true, 00:12:03.727 "data_offset": 2048, 00:12:03.727 "data_size": 63488 00:12:03.727 }, 00:12:03.727 { 00:12:03.727 "name": "BaseBdev2", 00:12:03.727 "uuid": "dc378cb5-a5d1-546f-8b6b-4ad726751ceb", 00:12:03.727 "is_configured": true, 00:12:03.727 "data_offset": 2048, 00:12:03.727 "data_size": 63488 00:12:03.727 } 00:12:03.727 ] 00:12:03.727 }' 00:12:03.727 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.727 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.986 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:03.986 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.986 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:03.986 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:03.986 [2024-10-09 01:32:02.846112] bdev_raid.c:1129:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:12:03.986 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.245 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:04.245 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.245 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:04.245 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.245 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.245 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.245 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:04.245 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:04.245 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:04.246 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:04.246 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.246 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.246 [2024-10-09 01:32:02.937827] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:04.246 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.246 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:04.246 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:12:04.246 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.246 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.246 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.246 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:04.246 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.246 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.246 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.246 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.246 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.246 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.246 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.246 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.246 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.246 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.246 "name": "raid_bdev1", 00:12:04.246 "uuid": "99221f6d-e27e-455f-be31-94db08d4a322", 00:12:04.246 "strip_size_kb": 0, 00:12:04.246 "state": "online", 00:12:04.246 "raid_level": "raid1", 00:12:04.246 "superblock": true, 00:12:04.246 "num_base_bdevs": 2, 00:12:04.246 "num_base_bdevs_discovered": 1, 00:12:04.246 "num_base_bdevs_operational": 1, 00:12:04.246 "base_bdevs_list": [ 00:12:04.246 { 00:12:04.246 "name": null, 00:12:04.246 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:04.246 "is_configured": false, 00:12:04.246 "data_offset": 0, 00:12:04.246 "data_size": 63488 00:12:04.246 }, 00:12:04.246 { 00:12:04.246 "name": "BaseBdev2", 00:12:04.246 "uuid": "dc378cb5-a5d1-546f-8b6b-4ad726751ceb", 00:12:04.246 "is_configured": true, 00:12:04.246 "data_offset": 2048, 00:12:04.246 "data_size": 63488 00:12:04.246 } 00:12:04.246 ] 00:12:04.246 }' 00:12:04.246 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.246 01:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.246 [2024-10-09 01:32:03.025262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:04.246 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:04.246 Zero copy mechanism will not be used. 00:12:04.246 Running I/O for 60 seconds... 00:12:04.505 01:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:04.505 01:32:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.505 01:32:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.505 [2024-10-09 01:32:03.392349] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:04.765 01:32:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.765 01:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:04.765 [2024-10-09 01:32:03.434608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:04.765 [2024-10-09 01:32:03.436820] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:04.765 [2024-10-09 01:32:03.566863] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 
6144 00:12:04.765 [2024-10-09 01:32:03.567604] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:05.025 [2024-10-09 01:32:03.789188] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:05.025 [2024-10-09 01:32:03.789431] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:05.285 211.00 IOPS, 633.00 MiB/s [2024-10-09T01:32:04.178Z] [2024-10-09 01:32:04.131288] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:05.557 [2024-10-09 01:32:04.252401] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:05.557 [2024-10-09 01:32:04.252834] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:05.557 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:05.557 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:05.557 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:05.557 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:05.557 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:05.557 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.557 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.557 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.557 01:32:04 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.831 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.831 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:05.831 "name": "raid_bdev1", 00:12:05.831 "uuid": "99221f6d-e27e-455f-be31-94db08d4a322", 00:12:05.831 "strip_size_kb": 0, 00:12:05.831 "state": "online", 00:12:05.831 "raid_level": "raid1", 00:12:05.831 "superblock": true, 00:12:05.831 "num_base_bdevs": 2, 00:12:05.831 "num_base_bdevs_discovered": 2, 00:12:05.831 "num_base_bdevs_operational": 2, 00:12:05.831 "process": { 00:12:05.831 "type": "rebuild", 00:12:05.831 "target": "spare", 00:12:05.831 "progress": { 00:12:05.831 "blocks": 10240, 00:12:05.831 "percent": 16 00:12:05.831 } 00:12:05.831 }, 00:12:05.831 "base_bdevs_list": [ 00:12:05.831 { 00:12:05.831 "name": "spare", 00:12:05.831 "uuid": "b7ab200d-9516-5311-9dd3-3888a08452df", 00:12:05.831 "is_configured": true, 00:12:05.831 "data_offset": 2048, 00:12:05.831 "data_size": 63488 00:12:05.831 }, 00:12:05.831 { 00:12:05.831 "name": "BaseBdev2", 00:12:05.831 "uuid": "dc378cb5-a5d1-546f-8b6b-4ad726751ceb", 00:12:05.831 "is_configured": true, 00:12:05.831 "data_offset": 2048, 00:12:05.831 "data_size": 63488 00:12:05.831 } 00:12:05.831 ] 00:12:05.831 }' 00:12:05.831 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:05.831 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:05.831 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:05.831 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:05.831 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:05.831 01:32:04 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.831 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.831 [2024-10-09 01:32:04.586060] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:05.831 [2024-10-09 01:32:04.606442] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:05.831 [2024-10-09 01:32:04.711694] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:05.831 [2024-10-09 01:32:04.720507] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.831 [2024-10-09 01:32:04.720572] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:05.831 [2024-10-09 01:32:04.720592] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:06.091 [2024-10-09 01:32:04.741840] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006150 00:12:06.091 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.091 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:06.091 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.091 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.091 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.091 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.091 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:06.091 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.091 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.091 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.091 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.091 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.091 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.091 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.091 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.091 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.091 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.091 "name": "raid_bdev1", 00:12:06.091 "uuid": "99221f6d-e27e-455f-be31-94db08d4a322", 00:12:06.091 "strip_size_kb": 0, 00:12:06.091 "state": "online", 00:12:06.091 "raid_level": "raid1", 00:12:06.091 "superblock": true, 00:12:06.091 "num_base_bdevs": 2, 00:12:06.091 "num_base_bdevs_discovered": 1, 00:12:06.091 "num_base_bdevs_operational": 1, 00:12:06.091 "base_bdevs_list": [ 00:12:06.091 { 00:12:06.091 "name": null, 00:12:06.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.091 "is_configured": false, 00:12:06.091 "data_offset": 0, 00:12:06.091 "data_size": 63488 00:12:06.091 }, 00:12:06.091 { 00:12:06.091 "name": "BaseBdev2", 00:12:06.091 "uuid": "dc378cb5-a5d1-546f-8b6b-4ad726751ceb", 00:12:06.091 "is_configured": true, 00:12:06.091 "data_offset": 2048, 00:12:06.091 "data_size": 63488 00:12:06.091 } 00:12:06.091 ] 00:12:06.091 }' 00:12:06.091 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:06.091 01:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.351 175.50 IOPS, 526.50 MiB/s [2024-10-09T01:32:05.244Z] 01:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:06.351 01:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:06.351 01:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:06.351 01:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:06.351 01:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:06.351 01:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.351 01:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.351 01:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.351 01:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.351 01:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.351 01:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:06.351 "name": "raid_bdev1", 00:12:06.351 "uuid": "99221f6d-e27e-455f-be31-94db08d4a322", 00:12:06.351 "strip_size_kb": 0, 00:12:06.351 "state": "online", 00:12:06.351 "raid_level": "raid1", 00:12:06.351 "superblock": true, 00:12:06.351 "num_base_bdevs": 2, 00:12:06.351 "num_base_bdevs_discovered": 1, 00:12:06.351 "num_base_bdevs_operational": 1, 00:12:06.351 "base_bdevs_list": [ 00:12:06.351 { 00:12:06.351 "name": null, 00:12:06.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.351 "is_configured": false, 00:12:06.351 "data_offset": 0, 00:12:06.351 "data_size": 63488 
00:12:06.351 }, 00:12:06.351 { 00:12:06.351 "name": "BaseBdev2", 00:12:06.351 "uuid": "dc378cb5-a5d1-546f-8b6b-4ad726751ceb", 00:12:06.351 "is_configured": true, 00:12:06.351 "data_offset": 2048, 00:12:06.351 "data_size": 63488 00:12:06.351 } 00:12:06.351 ] 00:12:06.351 }' 00:12:06.351 01:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:06.611 01:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:06.611 01:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:06.611 01:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:06.611 01:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:06.611 01:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.611 01:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.611 [2024-10-09 01:32:05.336323] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:06.611 01:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.611 01:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:06.611 [2024-10-09 01:32:05.394890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:12:06.611 [2024-10-09 01:32:05.397145] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:06.870 [2024-10-09 01:32:05.510893] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:06.870 [2024-10-09 01:32:05.511591] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:06.870 
[2024-10-09 01:32:05.738058] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:06.870 [2024-10-09 01:32:05.738457] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:07.440 178.67 IOPS, 536.00 MiB/s [2024-10-09T01:32:06.333Z] [2024-10-09 01:32:06.079615] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:07.440 [2024-10-09 01:32:06.194374] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:07.700 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:07.700 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:07.700 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:07.700 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:07.700 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:07.700 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.700 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.700 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.700 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.700 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.700 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:07.700 "name": "raid_bdev1", 00:12:07.700 "uuid": 
"99221f6d-e27e-455f-be31-94db08d4a322", 00:12:07.700 "strip_size_kb": 0, 00:12:07.700 "state": "online", 00:12:07.700 "raid_level": "raid1", 00:12:07.700 "superblock": true, 00:12:07.700 "num_base_bdevs": 2, 00:12:07.700 "num_base_bdevs_discovered": 2, 00:12:07.700 "num_base_bdevs_operational": 2, 00:12:07.700 "process": { 00:12:07.700 "type": "rebuild", 00:12:07.700 "target": "spare", 00:12:07.701 "progress": { 00:12:07.701 "blocks": 10240, 00:12:07.701 "percent": 16 00:12:07.701 } 00:12:07.701 }, 00:12:07.701 "base_bdevs_list": [ 00:12:07.701 { 00:12:07.701 "name": "spare", 00:12:07.701 "uuid": "b7ab200d-9516-5311-9dd3-3888a08452df", 00:12:07.701 "is_configured": true, 00:12:07.701 "data_offset": 2048, 00:12:07.701 "data_size": 63488 00:12:07.701 }, 00:12:07.701 { 00:12:07.701 "name": "BaseBdev2", 00:12:07.701 "uuid": "dc378cb5-a5d1-546f-8b6b-4ad726751ceb", 00:12:07.701 "is_configured": true, 00:12:07.701 "data_offset": 2048, 00:12:07.701 "data_size": 63488 00:12:07.701 } 00:12:07.701 ] 00:12:07.701 }' 00:12:07.701 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:07.701 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:07.701 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:07.701 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:07.701 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:07.701 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:07.701 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:07.701 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:07.701 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:07.701 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:07.701 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=342 00:12:07.701 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:07.701 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:07.701 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:07.701 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:07.701 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:07.701 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:07.701 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.701 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.701 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.701 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.701 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.701 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:07.701 "name": "raid_bdev1", 00:12:07.701 "uuid": "99221f6d-e27e-455f-be31-94db08d4a322", 00:12:07.701 "strip_size_kb": 0, 00:12:07.701 "state": "online", 00:12:07.701 "raid_level": "raid1", 00:12:07.701 "superblock": true, 00:12:07.701 "num_base_bdevs": 2, 00:12:07.701 "num_base_bdevs_discovered": 2, 00:12:07.701 "num_base_bdevs_operational": 2, 00:12:07.701 "process": { 
00:12:07.701 "type": "rebuild", 00:12:07.701 "target": "spare", 00:12:07.701 "progress": { 00:12:07.701 "blocks": 12288, 00:12:07.701 "percent": 19 00:12:07.701 } 00:12:07.701 }, 00:12:07.701 "base_bdevs_list": [ 00:12:07.701 { 00:12:07.701 "name": "spare", 00:12:07.701 "uuid": "b7ab200d-9516-5311-9dd3-3888a08452df", 00:12:07.701 "is_configured": true, 00:12:07.701 "data_offset": 2048, 00:12:07.701 "data_size": 63488 00:12:07.701 }, 00:12:07.701 { 00:12:07.701 "name": "BaseBdev2", 00:12:07.701 "uuid": "dc378cb5-a5d1-546f-8b6b-4ad726751ceb", 00:12:07.701 "is_configured": true, 00:12:07.701 "data_offset": 2048, 00:12:07.701 "data_size": 63488 00:12:07.701 } 00:12:07.701 ] 00:12:07.701 }' 00:12:07.701 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:07.961 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:07.961 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:07.961 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:07.961 01:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:08.221 [2024-10-09 01:32:07.027180] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:08.789 153.25 IOPS, 459.75 MiB/s [2024-10-09T01:32:07.682Z] [2024-10-09 01:32:07.481811] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:08.789 01:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:08.789 01:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:08.789 01:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:12:08.789 01:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:08.789 01:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:08.790 01:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.790 01:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.790 01:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.790 01:32:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.790 01:32:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.790 01:32:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.049 01:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:09.049 "name": "raid_bdev1", 00:12:09.049 "uuid": "99221f6d-e27e-455f-be31-94db08d4a322", 00:12:09.049 "strip_size_kb": 0, 00:12:09.049 "state": "online", 00:12:09.050 "raid_level": "raid1", 00:12:09.050 "superblock": true, 00:12:09.050 "num_base_bdevs": 2, 00:12:09.050 "num_base_bdevs_discovered": 2, 00:12:09.050 "num_base_bdevs_operational": 2, 00:12:09.050 "process": { 00:12:09.050 "type": "rebuild", 00:12:09.050 "target": "spare", 00:12:09.050 "progress": { 00:12:09.050 "blocks": 28672, 00:12:09.050 "percent": 45 00:12:09.050 } 00:12:09.050 }, 00:12:09.050 "base_bdevs_list": [ 00:12:09.050 { 00:12:09.050 "name": "spare", 00:12:09.050 "uuid": "b7ab200d-9516-5311-9dd3-3888a08452df", 00:12:09.050 "is_configured": true, 00:12:09.050 "data_offset": 2048, 00:12:09.050 "data_size": 63488 00:12:09.050 }, 00:12:09.050 { 00:12:09.050 "name": "BaseBdev2", 00:12:09.050 "uuid": "dc378cb5-a5d1-546f-8b6b-4ad726751ceb", 00:12:09.050 "is_configured": true, 00:12:09.050 
"data_offset": 2048, 00:12:09.050 "data_size": 63488 00:12:09.050 } 00:12:09.050 ] 00:12:09.050 }' 00:12:09.050 01:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:09.050 01:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:09.050 01:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:09.050 01:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:09.050 01:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:09.050 [2024-10-09 01:32:07.822866] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:09.310 [2024-10-09 01:32:08.032338] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:09.569 134.80 IOPS, 404.40 MiB/s [2024-10-09T01:32:08.462Z] [2024-10-09 01:32:08.360780] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:09.829 [2024-10-09 01:32:08.580349] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:10.089 01:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:10.089 01:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:10.089 01:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.089 01:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:10.089 01:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:10.089 01:32:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.089 01:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.089 01:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.089 01:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.089 01:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.089 01:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.089 01:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.089 "name": "raid_bdev1", 00:12:10.089 "uuid": "99221f6d-e27e-455f-be31-94db08d4a322", 00:12:10.089 "strip_size_kb": 0, 00:12:10.089 "state": "online", 00:12:10.089 "raid_level": "raid1", 00:12:10.089 "superblock": true, 00:12:10.089 "num_base_bdevs": 2, 00:12:10.089 "num_base_bdevs_discovered": 2, 00:12:10.089 "num_base_bdevs_operational": 2, 00:12:10.089 "process": { 00:12:10.089 "type": "rebuild", 00:12:10.089 "target": "spare", 00:12:10.089 "progress": { 00:12:10.089 "blocks": 43008, 00:12:10.089 "percent": 67 00:12:10.089 } 00:12:10.089 }, 00:12:10.089 "base_bdevs_list": [ 00:12:10.089 { 00:12:10.089 "name": "spare", 00:12:10.089 "uuid": "b7ab200d-9516-5311-9dd3-3888a08452df", 00:12:10.089 "is_configured": true, 00:12:10.089 "data_offset": 2048, 00:12:10.089 "data_size": 63488 00:12:10.089 }, 00:12:10.089 { 00:12:10.089 "name": "BaseBdev2", 00:12:10.089 "uuid": "dc378cb5-a5d1-546f-8b6b-4ad726751ceb", 00:12:10.089 "is_configured": true, 00:12:10.089 "data_offset": 2048, 00:12:10.089 "data_size": 63488 00:12:10.089 } 00:12:10.089 ] 00:12:10.089 }' 00:12:10.089 01:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.089 01:32:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:10.089 01:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.089 01:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:10.089 01:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:10.349 122.83 IOPS, 368.50 MiB/s [2024-10-09T01:32:09.242Z] [2024-10-09 01:32:09.139649] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:12:10.609 [2024-10-09 01:32:09.353396] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:12:11.178 01:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:11.178 01:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:11.178 01:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:11.178 01:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:11.178 01:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:11.178 01:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:11.178 01:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.178 01:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.178 01:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.178 01:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.178 01:32:09 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.178 01:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:11.178 "name": "raid_bdev1", 00:12:11.178 "uuid": "99221f6d-e27e-455f-be31-94db08d4a322", 00:12:11.178 "strip_size_kb": 0, 00:12:11.178 "state": "online", 00:12:11.178 "raid_level": "raid1", 00:12:11.178 "superblock": true, 00:12:11.178 "num_base_bdevs": 2, 00:12:11.178 "num_base_bdevs_discovered": 2, 00:12:11.178 "num_base_bdevs_operational": 2, 00:12:11.178 "process": { 00:12:11.178 "type": "rebuild", 00:12:11.178 "target": "spare", 00:12:11.178 "progress": { 00:12:11.178 "blocks": 61440, 00:12:11.178 "percent": 96 00:12:11.178 } 00:12:11.178 }, 00:12:11.178 "base_bdevs_list": [ 00:12:11.178 { 00:12:11.178 "name": "spare", 00:12:11.178 "uuid": "b7ab200d-9516-5311-9dd3-3888a08452df", 00:12:11.178 "is_configured": true, 00:12:11.178 "data_offset": 2048, 00:12:11.178 "data_size": 63488 00:12:11.178 }, 00:12:11.178 { 00:12:11.178 "name": "BaseBdev2", 00:12:11.178 "uuid": "dc378cb5-a5d1-546f-8b6b-4ad726751ceb", 00:12:11.178 "is_configured": true, 00:12:11.178 "data_offset": 2048, 00:12:11.178 "data_size": 63488 00:12:11.178 } 00:12:11.178 ] 00:12:11.178 }' 00:12:11.178 01:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:11.178 01:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:11.178 01:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:11.178 [2024-10-09 01:32:10.001665] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:11.178 110.57 IOPS, 331.71 MiB/s [2024-10-09T01:32:10.071Z] 01:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:11.178 01:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:12:11.437 [2024-10-09 01:32:10.106803] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:11.437 [2024-10-09 01:32:10.109589] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.379 102.12 IOPS, 306.38 MiB/s [2024-10-09T01:32:11.272Z] 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:12.379 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:12.379 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.379 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:12.379 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:12.379 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.379 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.379 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.379 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:12.379 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.379 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.379 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.379 "name": "raid_bdev1", 00:12:12.379 "uuid": "99221f6d-e27e-455f-be31-94db08d4a322", 00:12:12.380 "strip_size_kb": 0, 00:12:12.380 "state": "online", 00:12:12.380 "raid_level": "raid1", 00:12:12.380 "superblock": true, 00:12:12.380 "num_base_bdevs": 2, 00:12:12.380 "num_base_bdevs_discovered": 2, 00:12:12.380 
"num_base_bdevs_operational": 2, 00:12:12.380 "base_bdevs_list": [ 00:12:12.380 { 00:12:12.380 "name": "spare", 00:12:12.380 "uuid": "b7ab200d-9516-5311-9dd3-3888a08452df", 00:12:12.380 "is_configured": true, 00:12:12.380 "data_offset": 2048, 00:12:12.380 "data_size": 63488 00:12:12.380 }, 00:12:12.380 { 00:12:12.380 "name": "BaseBdev2", 00:12:12.380 "uuid": "dc378cb5-a5d1-546f-8b6b-4ad726751ceb", 00:12:12.380 "is_configured": true, 00:12:12.380 "data_offset": 2048, 00:12:12.380 "data_size": 63488 00:12:12.380 } 00:12:12.380 ] 00:12:12.380 }' 00:12:12.380 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.380 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:12.380 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:12.380 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:12.380 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:12.380 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:12.380 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.380 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:12.380 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:12.380 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.380 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.380 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.380 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.380 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:12.380 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.380 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.380 "name": "raid_bdev1", 00:12:12.380 "uuid": "99221f6d-e27e-455f-be31-94db08d4a322", 00:12:12.380 "strip_size_kb": 0, 00:12:12.380 "state": "online", 00:12:12.380 "raid_level": "raid1", 00:12:12.380 "superblock": true, 00:12:12.380 "num_base_bdevs": 2, 00:12:12.380 "num_base_bdevs_discovered": 2, 00:12:12.380 "num_base_bdevs_operational": 2, 00:12:12.380 "base_bdevs_list": [ 00:12:12.380 { 00:12:12.380 "name": "spare", 00:12:12.380 "uuid": "b7ab200d-9516-5311-9dd3-3888a08452df", 00:12:12.380 "is_configured": true, 00:12:12.380 "data_offset": 2048, 00:12:12.380 "data_size": 63488 00:12:12.380 }, 00:12:12.380 { 00:12:12.380 "name": "BaseBdev2", 00:12:12.380 "uuid": "dc378cb5-a5d1-546f-8b6b-4ad726751ceb", 00:12:12.380 "is_configured": true, 00:12:12.380 "data_offset": 2048, 00:12:12.380 "data_size": 63488 00:12:12.380 } 00:12:12.380 ] 00:12:12.380 }' 00:12:12.380 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.640 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:12.640 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:12.640 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:12.640 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:12.640 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.640 01:32:11 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.640 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.640 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.640 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:12.640 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.640 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.640 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.640 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.640 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.640 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.640 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.640 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:12.640 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.640 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.640 "name": "raid_bdev1", 00:12:12.640 "uuid": "99221f6d-e27e-455f-be31-94db08d4a322", 00:12:12.640 "strip_size_kb": 0, 00:12:12.640 "state": "online", 00:12:12.640 "raid_level": "raid1", 00:12:12.640 "superblock": true, 00:12:12.640 "num_base_bdevs": 2, 00:12:12.640 "num_base_bdevs_discovered": 2, 00:12:12.640 "num_base_bdevs_operational": 2, 00:12:12.640 "base_bdevs_list": [ 00:12:12.640 { 00:12:12.640 "name": "spare", 00:12:12.640 "uuid": "b7ab200d-9516-5311-9dd3-3888a08452df", 00:12:12.640 
"is_configured": true, 00:12:12.640 "data_offset": 2048, 00:12:12.640 "data_size": 63488 00:12:12.640 }, 00:12:12.640 { 00:12:12.640 "name": "BaseBdev2", 00:12:12.640 "uuid": "dc378cb5-a5d1-546f-8b6b-4ad726751ceb", 00:12:12.640 "is_configured": true, 00:12:12.640 "data_offset": 2048, 00:12:12.640 "data_size": 63488 00:12:12.640 } 00:12:12.640 ] 00:12:12.640 }' 00:12:12.640 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.640 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.209 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:13.210 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.210 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.210 [2024-10-09 01:32:11.851432] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:13.210 [2024-10-09 01:32:11.851559] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:13.210 00:12:13.210 Latency(us) 00:12:13.210 [2024-10-09T01:32:12.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:13.210 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:13.210 raid_bdev1 : 8.92 96.87 290.61 0.00 0.00 15261.80 282.04 116071.78 00:12:13.210 [2024-10-09T01:32:12.103Z] =================================================================================================================== 00:12:13.210 [2024-10-09T01:32:12.103Z] Total : 96.87 290.61 0.00 0.00 15261.80 282.04 116071.78 00:12:13.210 [2024-10-09 01:32:11.951205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.210 [2024-10-09 01:32:11.951287] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:13.210 [2024-10-09 01:32:11.951387] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:13.210 [2024-10-09 01:32:11.951437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:13.210 { 00:12:13.210 "results": [ 00:12:13.210 { 00:12:13.210 "job": "raid_bdev1", 00:12:13.210 "core_mask": "0x1", 00:12:13.210 "workload": "randrw", 00:12:13.210 "percentage": 50, 00:12:13.210 "status": "finished", 00:12:13.210 "queue_depth": 2, 00:12:13.210 "io_size": 3145728, 00:12:13.210 "runtime": 8.919035, 00:12:13.210 "iops": 96.87146647591359, 00:12:13.210 "mibps": 290.61439942774075, 00:12:13.210 "io_failed": 0, 00:12:13.210 "io_timeout": 0, 00:12:13.210 "avg_latency_us": 15261.802040862443, 00:12:13.210 "min_latency_us": 282.03957116708796, 00:12:13.210 "max_latency_us": 116071.77895929574 00:12:13.210 } 00:12:13.210 ], 00:12:13.210 "core_count": 1 00:12:13.210 } 00:12:13.210 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.210 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.210 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:13.210 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.210 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.210 01:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.210 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:13.210 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:13.210 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:13.210 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # 
nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:13.210 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:13.210 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:13.210 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:13.210 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:13.210 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:13.210 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:13.210 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:13.210 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:13.210 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:13.470 /dev/nbd0 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:13.470 01:32:12 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:13.470 1+0 records in 00:12:13.470 1+0 records out 00:12:13.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444945 s, 9.2 MB/s 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:13.470 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:13.730 /dev/nbd1 00:12:13.730 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:13.730 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:13.730 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:13.730 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:13.730 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:13.730 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:13.730 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:13.730 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:13.730 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:13.730 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:13.730 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 
count=1 iflag=direct 00:12:13.730 1+0 records in 00:12:13.730 1+0 records out 00:12:13.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526704 s, 7.8 MB/s 00:12:13.730 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.730 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:13.730 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.730 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:13.730 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:13.730 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:13.730 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:13.730 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:13.730 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:13.730 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:13.730 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:13.730 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:13.730 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:13.730 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:13.730 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:13.990 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:13.990 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:13.990 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:13.990 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:13.990 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:13.990 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:13.990 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:13.990 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:13.990 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:13.990 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:13.990 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:13.990 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:13.990 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:13.990 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:13.990 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:14.250 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:14.250 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:14.250 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:14.250 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:14.250 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:14.250 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:14.250 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:14.250 01:32:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:14.250 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:14.250 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:14.250 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.250 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.250 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.250 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:14.250 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.250 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.250 [2024-10-09 01:32:13.018301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:14.250 [2024-10-09 01:32:13.018434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.250 [2024-10-09 01:32:13.018479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:14.250 [2024-10-09 01:32:13.018509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.250 [2024-10-09 01:32:13.021129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.250 [2024-10-09 01:32:13.021208] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:14.250 [2024-10-09 01:32:13.021335] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:14.250 [2024-10-09 01:32:13.021410] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:14.250 [2024-10-09 01:32:13.021586] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:14.250 spare 00:12:14.250 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.250 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:14.250 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.250 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.250 [2024-10-09 01:32:13.121699] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:14.250 [2024-10-09 01:32:13.121775] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:14.250 [2024-10-09 01:32:13.122120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:12:14.250 [2024-10-09 01:32:13.122308] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:14.250 [2024-10-09 01:32:13.122349] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:14.250 [2024-10-09 01:32:13.122552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.250 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.251 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:14.251 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:12:14.251 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.251 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.251 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.251 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:14.251 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.251 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.251 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.251 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.251 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.251 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.251 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.251 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.510 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.510 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.510 "name": "raid_bdev1", 00:12:14.510 "uuid": "99221f6d-e27e-455f-be31-94db08d4a322", 00:12:14.510 "strip_size_kb": 0, 00:12:14.510 "state": "online", 00:12:14.510 "raid_level": "raid1", 00:12:14.510 "superblock": true, 00:12:14.510 "num_base_bdevs": 2, 00:12:14.510 "num_base_bdevs_discovered": 2, 00:12:14.510 "num_base_bdevs_operational": 2, 00:12:14.510 "base_bdevs_list": [ 00:12:14.510 { 00:12:14.510 
"name": "spare", 00:12:14.510 "uuid": "b7ab200d-9516-5311-9dd3-3888a08452df", 00:12:14.510 "is_configured": true, 00:12:14.510 "data_offset": 2048, 00:12:14.510 "data_size": 63488 00:12:14.510 }, 00:12:14.510 { 00:12:14.511 "name": "BaseBdev2", 00:12:14.511 "uuid": "dc378cb5-a5d1-546f-8b6b-4ad726751ceb", 00:12:14.511 "is_configured": true, 00:12:14.511 "data_offset": 2048, 00:12:14.511 "data_size": 63488 00:12:14.511 } 00:12:14.511 ] 00:12:14.511 }' 00:12:14.511 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.511 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.770 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:14.770 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.770 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:14.770 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:14.770 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.770 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.770 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.770 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.770 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.770 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.770 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:14.770 "name": "raid_bdev1", 00:12:14.770 "uuid": "99221f6d-e27e-455f-be31-94db08d4a322", 
00:12:14.770 "strip_size_kb": 0, 00:12:14.770 "state": "online", 00:12:14.770 "raid_level": "raid1", 00:12:14.770 "superblock": true, 00:12:14.770 "num_base_bdevs": 2, 00:12:14.770 "num_base_bdevs_discovered": 2, 00:12:14.770 "num_base_bdevs_operational": 2, 00:12:14.770 "base_bdevs_list": [ 00:12:14.770 { 00:12:14.770 "name": "spare", 00:12:14.770 "uuid": "b7ab200d-9516-5311-9dd3-3888a08452df", 00:12:14.770 "is_configured": true, 00:12:14.770 "data_offset": 2048, 00:12:14.770 "data_size": 63488 00:12:14.770 }, 00:12:14.770 { 00:12:14.770 "name": "BaseBdev2", 00:12:14.770 "uuid": "dc378cb5-a5d1-546f-8b6b-4ad726751ceb", 00:12:14.770 "is_configured": true, 00:12:14.770 "data_offset": 2048, 00:12:14.770 "data_size": 63488 00:12:14.770 } 00:12:14.770 ] 00:12:14.770 }' 00:12:14.770 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:14.770 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:14.770 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd 
bdev_raid_remove_base_bdev spare 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.030 [2024-10-09 01:32:13.750749] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.030 "name": "raid_bdev1", 00:12:15.030 "uuid": "99221f6d-e27e-455f-be31-94db08d4a322", 00:12:15.030 "strip_size_kb": 0, 00:12:15.030 "state": "online", 00:12:15.030 "raid_level": "raid1", 00:12:15.030 "superblock": true, 00:12:15.030 "num_base_bdevs": 2, 00:12:15.030 "num_base_bdevs_discovered": 1, 00:12:15.030 "num_base_bdevs_operational": 1, 00:12:15.030 "base_bdevs_list": [ 00:12:15.030 { 00:12:15.030 "name": null, 00:12:15.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.030 "is_configured": false, 00:12:15.030 "data_offset": 0, 00:12:15.030 "data_size": 63488 00:12:15.030 }, 00:12:15.030 { 00:12:15.030 "name": "BaseBdev2", 00:12:15.030 "uuid": "dc378cb5-a5d1-546f-8b6b-4ad726751ceb", 00:12:15.030 "is_configured": true, 00:12:15.030 "data_offset": 2048, 00:12:15.030 "data_size": 63488 00:12:15.030 } 00:12:15.030 ] 00:12:15.030 }' 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.030 01:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.600 01:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:15.600 01:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.600 01:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.600 [2024-10-09 01:32:14.202979] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:15.600 [2024-10-09 01:32:14.203248] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:15.600 [2024-10-09 
01:32:14.203273] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:15.600 [2024-10-09 01:32:14.203321] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:15.600 [2024-10-09 01:32:14.211085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:12:15.600 01:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.600 01:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:15.600 [2024-10-09 01:32:14.213068] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:16.539 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:16.539 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:16.539 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:16.539 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:16.539 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:16.539 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.539 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.539 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.539 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.539 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.539 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:16.539 "name": "raid_bdev1", 00:12:16.539 
"uuid": "99221f6d-e27e-455f-be31-94db08d4a322", 00:12:16.539 "strip_size_kb": 0, 00:12:16.539 "state": "online", 00:12:16.539 "raid_level": "raid1", 00:12:16.539 "superblock": true, 00:12:16.539 "num_base_bdevs": 2, 00:12:16.539 "num_base_bdevs_discovered": 2, 00:12:16.539 "num_base_bdevs_operational": 2, 00:12:16.539 "process": { 00:12:16.539 "type": "rebuild", 00:12:16.539 "target": "spare", 00:12:16.539 "progress": { 00:12:16.539 "blocks": 20480, 00:12:16.539 "percent": 32 00:12:16.539 } 00:12:16.539 }, 00:12:16.539 "base_bdevs_list": [ 00:12:16.539 { 00:12:16.539 "name": "spare", 00:12:16.539 "uuid": "b7ab200d-9516-5311-9dd3-3888a08452df", 00:12:16.540 "is_configured": true, 00:12:16.540 "data_offset": 2048, 00:12:16.540 "data_size": 63488 00:12:16.540 }, 00:12:16.540 { 00:12:16.540 "name": "BaseBdev2", 00:12:16.540 "uuid": "dc378cb5-a5d1-546f-8b6b-4ad726751ceb", 00:12:16.540 "is_configured": true, 00:12:16.540 "data_offset": 2048, 00:12:16.540 "data_size": 63488 00:12:16.540 } 00:12:16.540 ] 00:12:16.540 }' 00:12:16.540 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:16.540 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:16.540 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:16.540 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:16.540 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:16.540 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.540 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.540 [2024-10-09 01:32:15.374730] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:16.540 [2024-10-09 01:32:15.422625] 
bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:16.540 [2024-10-09 01:32:15.422728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.540 [2024-10-09 01:32:15.422764] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:16.540 [2024-10-09 01:32:15.422790] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:16.800 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.800 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:16.800 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.800 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.800 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.800 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.800 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:16.800 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.800 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.800 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.800 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.800 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.800 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.800 01:32:15 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.800 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.800 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.800 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.800 "name": "raid_bdev1", 00:12:16.800 "uuid": "99221f6d-e27e-455f-be31-94db08d4a322", 00:12:16.800 "strip_size_kb": 0, 00:12:16.800 "state": "online", 00:12:16.800 "raid_level": "raid1", 00:12:16.800 "superblock": true, 00:12:16.800 "num_base_bdevs": 2, 00:12:16.800 "num_base_bdevs_discovered": 1, 00:12:16.800 "num_base_bdevs_operational": 1, 00:12:16.800 "base_bdevs_list": [ 00:12:16.800 { 00:12:16.800 "name": null, 00:12:16.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.800 "is_configured": false, 00:12:16.800 "data_offset": 0, 00:12:16.800 "data_size": 63488 00:12:16.800 }, 00:12:16.800 { 00:12:16.800 "name": "BaseBdev2", 00:12:16.800 "uuid": "dc378cb5-a5d1-546f-8b6b-4ad726751ceb", 00:12:16.800 "is_configured": true, 00:12:16.800 "data_offset": 2048, 00:12:16.800 "data_size": 63488 00:12:16.800 } 00:12:16.800 ] 00:12:16.800 }' 00:12:16.800 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.800 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.064 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:17.064 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.064 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.064 [2024-10-09 01:32:15.874542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:17.064 [2024-10-09 01:32:15.874663] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.064 [2024-10-09 01:32:15.874701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:17.064 [2024-10-09 01:32:15.874733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.064 [2024-10-09 01:32:15.875228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.064 [2024-10-09 01:32:15.875291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:17.064 [2024-10-09 01:32:15.875379] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:17.064 [2024-10-09 01:32:15.875395] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:17.064 [2024-10-09 01:32:15.875405] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:17.064 [2024-10-09 01:32:15.875428] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:17.064 spare 00:12:17.064 [2024-10-09 01:32:15.882009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b340 00:12:17.064 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.064 01:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:17.064 [2024-10-09 01:32:15.884142] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:18.015 01:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:18.015 01:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:18.015 01:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:18.015 01:32:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:18.015 01:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:18.015 01:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.015 01:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.015 01:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.015 01:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.275 01:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.275 01:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:18.275 "name": "raid_bdev1", 00:12:18.275 "uuid": "99221f6d-e27e-455f-be31-94db08d4a322", 00:12:18.275 "strip_size_kb": 0, 00:12:18.275 "state": "online", 00:12:18.275 "raid_level": "raid1", 00:12:18.275 "superblock": true, 00:12:18.275 "num_base_bdevs": 2, 00:12:18.275 "num_base_bdevs_discovered": 2, 00:12:18.275 "num_base_bdevs_operational": 2, 00:12:18.275 "process": { 00:12:18.275 "type": "rebuild", 00:12:18.275 "target": "spare", 00:12:18.275 "progress": { 00:12:18.275 "blocks": 20480, 00:12:18.275 "percent": 32 00:12:18.275 } 00:12:18.275 }, 00:12:18.275 "base_bdevs_list": [ 00:12:18.275 { 00:12:18.275 "name": "spare", 00:12:18.275 "uuid": "b7ab200d-9516-5311-9dd3-3888a08452df", 00:12:18.275 "is_configured": true, 00:12:18.275 "data_offset": 2048, 00:12:18.275 "data_size": 63488 00:12:18.275 }, 00:12:18.275 { 00:12:18.275 "name": "BaseBdev2", 00:12:18.275 "uuid": "dc378cb5-a5d1-546f-8b6b-4ad726751ceb", 00:12:18.275 "is_configured": true, 00:12:18.275 "data_offset": 2048, 00:12:18.275 "data_size": 63488 00:12:18.275 } 00:12:18.275 ] 00:12:18.275 }' 00:12:18.275 01:32:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.275 01:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:18.275 01:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.275 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:18.275 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:18.275 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.275 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.275 [2024-10-09 01:32:17.038129] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:18.275 [2024-10-09 01:32:17.094034] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:18.275 [2024-10-09 01:32:17.094142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.275 [2024-10-09 01:32:17.094183] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:18.275 [2024-10-09 01:32:17.094205] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:18.275 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.275 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:18.275 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.275 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.275 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.275 01:32:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.275 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:18.275 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.275 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.275 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.275 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.275 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.275 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.275 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.275 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.275 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.275 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.275 "name": "raid_bdev1", 00:12:18.275 "uuid": "99221f6d-e27e-455f-be31-94db08d4a322", 00:12:18.275 "strip_size_kb": 0, 00:12:18.275 "state": "online", 00:12:18.275 "raid_level": "raid1", 00:12:18.275 "superblock": true, 00:12:18.275 "num_base_bdevs": 2, 00:12:18.275 "num_base_bdevs_discovered": 1, 00:12:18.275 "num_base_bdevs_operational": 1, 00:12:18.275 "base_bdevs_list": [ 00:12:18.275 { 00:12:18.275 "name": null, 00:12:18.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.275 "is_configured": false, 00:12:18.275 "data_offset": 0, 00:12:18.275 "data_size": 63488 00:12:18.275 }, 00:12:18.275 { 00:12:18.275 "name": "BaseBdev2", 00:12:18.276 "uuid": 
"dc378cb5-a5d1-546f-8b6b-4ad726751ceb", 00:12:18.276 "is_configured": true, 00:12:18.276 "data_offset": 2048, 00:12:18.276 "data_size": 63488 00:12:18.276 } 00:12:18.276 ] 00:12:18.276 }' 00:12:18.276 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.276 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.845 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:18.845 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:18.845 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:18.845 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:18.845 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:18.845 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.845 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.845 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.846 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.846 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.846 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:18.846 "name": "raid_bdev1", 00:12:18.846 "uuid": "99221f6d-e27e-455f-be31-94db08d4a322", 00:12:18.846 "strip_size_kb": 0, 00:12:18.846 "state": "online", 00:12:18.846 "raid_level": "raid1", 00:12:18.846 "superblock": true, 00:12:18.846 "num_base_bdevs": 2, 00:12:18.846 "num_base_bdevs_discovered": 1, 00:12:18.846 "num_base_bdevs_operational": 1, 00:12:18.846 
"base_bdevs_list": [ 00:12:18.846 { 00:12:18.846 "name": null, 00:12:18.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.846 "is_configured": false, 00:12:18.846 "data_offset": 0, 00:12:18.846 "data_size": 63488 00:12:18.846 }, 00:12:18.846 { 00:12:18.846 "name": "BaseBdev2", 00:12:18.846 "uuid": "dc378cb5-a5d1-546f-8b6b-4ad726751ceb", 00:12:18.846 "is_configured": true, 00:12:18.846 "data_offset": 2048, 00:12:18.846 "data_size": 63488 00:12:18.846 } 00:12:18.846 ] 00:12:18.846 }' 00:12:18.846 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:18.846 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:18.846 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:18.846 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:18.846 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:18.846 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.846 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.846 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.846 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:18.846 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.105 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.105 [2024-10-09 01:32:17.741907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:19.105 [2024-10-09 01:32:17.742012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:12:19.105 [2024-10-09 01:32:17.742055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:19.105 [2024-10-09 01:32:17.742085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.105 [2024-10-09 01:32:17.742576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.105 [2024-10-09 01:32:17.742595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:19.105 [2024-10-09 01:32:17.742674] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:19.105 [2024-10-09 01:32:17.742693] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:19.105 [2024-10-09 01:32:17.742703] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:19.105 [2024-10-09 01:32:17.742714] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:19.105 BaseBdev1 00:12:19.105 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.105 01:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:20.045 01:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:20.045 01:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.045 01:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.045 01:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.045 01:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.045 01:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:12:20.045 01:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.045 01:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.045 01:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.045 01:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.045 01:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.045 01:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.045 01:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.045 01:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.045 01:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.045 01:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.045 "name": "raid_bdev1", 00:12:20.045 "uuid": "99221f6d-e27e-455f-be31-94db08d4a322", 00:12:20.045 "strip_size_kb": 0, 00:12:20.045 "state": "online", 00:12:20.045 "raid_level": "raid1", 00:12:20.045 "superblock": true, 00:12:20.045 "num_base_bdevs": 2, 00:12:20.045 "num_base_bdevs_discovered": 1, 00:12:20.045 "num_base_bdevs_operational": 1, 00:12:20.045 "base_bdevs_list": [ 00:12:20.045 { 00:12:20.045 "name": null, 00:12:20.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.045 "is_configured": false, 00:12:20.045 "data_offset": 0, 00:12:20.045 "data_size": 63488 00:12:20.045 }, 00:12:20.045 { 00:12:20.045 "name": "BaseBdev2", 00:12:20.045 "uuid": "dc378cb5-a5d1-546f-8b6b-4ad726751ceb", 00:12:20.045 "is_configured": true, 00:12:20.045 "data_offset": 2048, 00:12:20.045 "data_size": 63488 00:12:20.045 } 00:12:20.045 ] 00:12:20.045 }' 
00:12:20.045 01:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.045 01:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.305 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:20.305 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:20.305 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:20.305 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:20.305 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:20.305 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.305 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.305 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.305 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.565 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.565 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:20.565 "name": "raid_bdev1", 00:12:20.565 "uuid": "99221f6d-e27e-455f-be31-94db08d4a322", 00:12:20.565 "strip_size_kb": 0, 00:12:20.565 "state": "online", 00:12:20.565 "raid_level": "raid1", 00:12:20.565 "superblock": true, 00:12:20.565 "num_base_bdevs": 2, 00:12:20.565 "num_base_bdevs_discovered": 1, 00:12:20.565 "num_base_bdevs_operational": 1, 00:12:20.565 "base_bdevs_list": [ 00:12:20.565 { 00:12:20.565 "name": null, 00:12:20.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.565 "is_configured": false, 00:12:20.565 "data_offset": 0, 
00:12:20.565 "data_size": 63488 00:12:20.565 }, 00:12:20.565 { 00:12:20.565 "name": "BaseBdev2", 00:12:20.565 "uuid": "dc378cb5-a5d1-546f-8b6b-4ad726751ceb", 00:12:20.565 "is_configured": true, 00:12:20.565 "data_offset": 2048, 00:12:20.565 "data_size": 63488 00:12:20.565 } 00:12:20.565 ] 00:12:20.565 }' 00:12:20.565 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:20.565 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:20.565 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:20.565 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:20.565 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:20.565 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:12:20.565 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:20.565 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:20.565 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:20.565 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:20.565 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:20.565 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:20.565 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.565 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:12:20.565 [2024-10-09 01:32:19.330508] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:20.565 [2024-10-09 01:32:19.330691] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:20.565 [2024-10-09 01:32:19.330720] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:20.565 request: 00:12:20.565 { 00:12:20.565 "base_bdev": "BaseBdev1", 00:12:20.565 "raid_bdev": "raid_bdev1", 00:12:20.565 "method": "bdev_raid_add_base_bdev", 00:12:20.565 "req_id": 1 00:12:20.565 } 00:12:20.565 Got JSON-RPC error response 00:12:20.565 response: 00:12:20.565 { 00:12:20.565 "code": -22, 00:12:20.565 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:20.565 } 00:12:20.565 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:20.565 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:12:20.565 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:20.565 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:20.565 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:20.565 01:32:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:21.502 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:21.502 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.502 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.502 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.502 01:32:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.502 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:21.502 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.502 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.502 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.502 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.502 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.502 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.502 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.502 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.502 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.761 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.761 "name": "raid_bdev1", 00:12:21.761 "uuid": "99221f6d-e27e-455f-be31-94db08d4a322", 00:12:21.761 "strip_size_kb": 0, 00:12:21.761 "state": "online", 00:12:21.761 "raid_level": "raid1", 00:12:21.761 "superblock": true, 00:12:21.761 "num_base_bdevs": 2, 00:12:21.762 "num_base_bdevs_discovered": 1, 00:12:21.762 "num_base_bdevs_operational": 1, 00:12:21.762 "base_bdevs_list": [ 00:12:21.762 { 00:12:21.762 "name": null, 00:12:21.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.762 "is_configured": false, 00:12:21.762 "data_offset": 0, 00:12:21.762 "data_size": 63488 00:12:21.762 }, 00:12:21.762 { 00:12:21.762 "name": "BaseBdev2", 00:12:21.762 "uuid": 
"dc378cb5-a5d1-546f-8b6b-4ad726751ceb", 00:12:21.762 "is_configured": true, 00:12:21.762 "data_offset": 2048, 00:12:21.762 "data_size": 63488 00:12:21.762 } 00:12:21.762 ] 00:12:21.762 }' 00:12:21.762 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.762 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.021 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:22.021 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:22.021 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:22.021 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:22.021 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:22.021 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.021 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.021 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.021 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.021 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.021 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:22.021 "name": "raid_bdev1", 00:12:22.021 "uuid": "99221f6d-e27e-455f-be31-94db08d4a322", 00:12:22.021 "strip_size_kb": 0, 00:12:22.021 "state": "online", 00:12:22.021 "raid_level": "raid1", 00:12:22.021 "superblock": true, 00:12:22.021 "num_base_bdevs": 2, 00:12:22.021 "num_base_bdevs_discovered": 1, 00:12:22.021 "num_base_bdevs_operational": 1, 00:12:22.021 
"base_bdevs_list": [ 00:12:22.021 { 00:12:22.021 "name": null, 00:12:22.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.021 "is_configured": false, 00:12:22.021 "data_offset": 0, 00:12:22.021 "data_size": 63488 00:12:22.021 }, 00:12:22.021 { 00:12:22.021 "name": "BaseBdev2", 00:12:22.021 "uuid": "dc378cb5-a5d1-546f-8b6b-4ad726751ceb", 00:12:22.021 "is_configured": true, 00:12:22.021 "data_offset": 2048, 00:12:22.021 "data_size": 63488 00:12:22.021 } 00:12:22.021 ] 00:12:22.021 }' 00:12:22.021 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:22.021 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:22.021 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:22.281 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:22.281 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 88639 00:12:22.281 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 88639 ']' 00:12:22.281 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 88639 00:12:22.281 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:12:22.281 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:22.281 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88639 00:12:22.281 killing process with pid 88639 00:12:22.281 Received shutdown signal, test time was about 17.947292 seconds 00:12:22.281 00:12:22.281 Latency(us) 00:12:22.281 [2024-10-09T01:32:21.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.281 [2024-10-09T01:32:21.174Z] 
=================================================================================================================== 00:12:22.281 [2024-10-09T01:32:21.174Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:22.281 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:22.281 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:22.281 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88639' 00:12:22.281 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 88639 00:12:22.281 [2024-10-09 01:32:20.976268] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:22.281 [2024-10-09 01:32:20.976417] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:22.281 01:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 88639 00:12:22.281 [2024-10-09 01:32:20.976491] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:22.281 [2024-10-09 01:32:20.976506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:22.281 [2024-10-09 01:32:21.024090] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:22.542 01:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:22.542 00:12:22.542 real 0m20.069s 00:12:22.542 user 0m26.262s 00:12:22.542 sys 0m2.362s 00:12:22.542 01:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:22.542 01:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.542 ************************************ 00:12:22.542 END TEST raid_rebuild_test_sb_io 00:12:22.542 ************************************ 00:12:22.802 01:32:21 bdev_raid -- 
bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:22.802 01:32:21 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:12:22.802 01:32:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:22.802 01:32:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:22.802 01:32:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:22.802 ************************************ 00:12:22.802 START TEST raid_rebuild_test 00:12:22.802 ************************************ 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=89337 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 89337 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 
-U -z -L bdev_raid 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 89337 ']' 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:22.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:22.802 01:32:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.802 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:22.802 Zero copy mechanism will not be used. 00:12:22.802 [2024-10-09 01:32:21.570342] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:12:22.802 [2024-10-09 01:32:21.570485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89337 ] 00:12:23.062 [2024-10-09 01:32:21.706934] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:23.062 [2024-10-09 01:32:21.735516] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.062 [2024-10-09 01:32:21.806526] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.062 [2024-10-09 01:32:21.883872] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:23.062 [2024-10-09 01:32:21.883917] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.632 BaseBdev1_malloc 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.632 [2024-10-09 01:32:22.411226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:23.632 [2024-10-09 01:32:22.411294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.632 [2024-10-09 01:32:22.411323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:23.632 [2024-10-09 01:32:22.411354] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.632 [2024-10-09 01:32:22.413830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.632 [2024-10-09 01:32:22.413867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:23.632 BaseBdev1 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.632 BaseBdev2_malloc 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.632 [2024-10-09 01:32:22.457988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:23.632 [2024-10-09 01:32:22.458072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.632 [2024-10-09 01:32:22.458103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:23.632 [2024-10-09 01:32:22.458122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.632 [2024-10-09 01:32:22.461840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.632 [2024-10-09 01:32:22.461892] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:23.632 BaseBdev2 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.632 BaseBdev3_malloc 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.632 [2024-10-09 01:32:22.492672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:23.632 [2024-10-09 01:32:22.492735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.632 [2024-10-09 01:32:22.492758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:23.632 [2024-10-09 01:32:22.492770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.632 [2024-10-09 01:32:22.495084] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.632 [2024-10-09 01:32:22.495119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:23.632 BaseBdev3 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.632 
01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.632 BaseBdev4_malloc 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.632 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.893 [2024-10-09 01:32:22.527210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:23.893 [2024-10-09 01:32:22.527269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.893 [2024-10-09 01:32:22.527289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:23.893 [2024-10-09 01:32:22.527301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.893 [2024-10-09 01:32:22.529682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.893 [2024-10-09 01:32:22.529715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:23.893 BaseBdev4 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.893 spare_malloc 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.893 spare_delay 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.893 [2024-10-09 01:32:22.573697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:23.893 [2024-10-09 01:32:22.573752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.893 [2024-10-09 01:32:22.573771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:23.893 [2024-10-09 01:32:22.573782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.893 [2024-10-09 01:32:22.576047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.893 [2024-10-09 01:32:22.576081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:23.893 spare 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r 
raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.893 [2024-10-09 01:32:22.585806] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:23.893 [2024-10-09 01:32:22.587866] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:23.893 [2024-10-09 01:32:22.587933] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:23.893 [2024-10-09 01:32:22.587979] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:23.893 [2024-10-09 01:32:22.588053] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:23.893 [2024-10-09 01:32:22.588070] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:23.893 [2024-10-09 01:32:22.588328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:23.893 [2024-10-09 01:32:22.588492] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:23.893 [2024-10-09 01:32:22.588509] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:23.893 [2024-10-09 01:32:22.588664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.893 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.893 "name": "raid_bdev1", 00:12:23.893 "uuid": "5858c0ae-1539-4c8c-a254-dcf3cc1c7387", 00:12:23.894 "strip_size_kb": 0, 00:12:23.894 "state": "online", 00:12:23.894 "raid_level": "raid1", 00:12:23.894 "superblock": false, 00:12:23.894 "num_base_bdevs": 4, 00:12:23.894 "num_base_bdevs_discovered": 4, 00:12:23.894 "num_base_bdevs_operational": 4, 00:12:23.894 "base_bdevs_list": [ 00:12:23.894 { 00:12:23.894 "name": "BaseBdev1", 00:12:23.894 "uuid": "f93dd766-70c4-52c1-81eb-e73b42e37993", 00:12:23.894 "is_configured": true, 00:12:23.894 "data_offset": 0, 00:12:23.894 "data_size": 65536 00:12:23.894 }, 00:12:23.894 { 00:12:23.894 
"name": "BaseBdev2", 00:12:23.894 "uuid": "61ed09ed-4c87-5f4d-b67c-4446c3358823", 00:12:23.894 "is_configured": true, 00:12:23.894 "data_offset": 0, 00:12:23.894 "data_size": 65536 00:12:23.894 }, 00:12:23.894 { 00:12:23.894 "name": "BaseBdev3", 00:12:23.894 "uuid": "12b8ee2a-5546-5f63-ab5a-d492d97c2cd4", 00:12:23.894 "is_configured": true, 00:12:23.894 "data_offset": 0, 00:12:23.894 "data_size": 65536 00:12:23.894 }, 00:12:23.894 { 00:12:23.894 "name": "BaseBdev4", 00:12:23.894 "uuid": "14946f15-8fcc-5338-8c09-4b7888f5bb9b", 00:12:23.894 "is_configured": true, 00:12:23.894 "data_offset": 0, 00:12:23.894 "data_size": 65536 00:12:23.894 } 00:12:23.894 ] 00:12:23.894 }' 00:12:23.894 01:32:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.894 01:32:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.463 01:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:24.463 01:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:24.463 01:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.463 01:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.463 [2024-10-09 01:32:23.062193] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:24.463 01:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.463 01:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:24.463 01:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.463 01:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.463 01:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.463 01:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq 
-r '.[].base_bdevs_list[0].data_offset' 00:12:24.463 01:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.463 01:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:24.463 01:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:24.463 01:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:24.463 01:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:24.463 01:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:24.463 01:32:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:24.463 01:32:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:24.463 01:32:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:24.463 01:32:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:24.464 01:32:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:24.464 01:32:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:24.464 01:32:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:24.464 01:32:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:24.464 01:32:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:24.464 [2024-10-09 01:32:23.334051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:24.464 /dev/nbd0 00:12:24.723 01:32:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:24.723 01:32:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:24.723 
01:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:24.723 01:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:24.723 01:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:24.723 01:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:24.723 01:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:24.723 01:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:24.723 01:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:24.723 01:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:24.723 01:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:24.723 1+0 records in 00:12:24.723 1+0 records out 00:12:24.723 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252903 s, 16.2 MB/s 00:12:24.723 01:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.723 01:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:24.723 01:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.723 01:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:24.723 01:32:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:24.723 01:32:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:24.723 01:32:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:24.723 01:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 
00:12:24.724 01:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:24.724 01:32:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:30.001 65536+0 records in 00:12:30.001 65536+0 records out 00:12:30.001 33554432 bytes (34 MB, 32 MiB) copied, 5.05698 s, 6.6 MB/s 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:30.001 [2024-10-09 01:32:28.665404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:30.001 01:32:28 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.001 [2024-10-09 01:32:28.677495] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.001 01:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.002 01:32:28 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.002 01:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.002 01:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.002 "name": "raid_bdev1", 00:12:30.002 "uuid": "5858c0ae-1539-4c8c-a254-dcf3cc1c7387", 00:12:30.002 "strip_size_kb": 0, 00:12:30.002 "state": "online", 00:12:30.002 "raid_level": "raid1", 00:12:30.002 "superblock": false, 00:12:30.002 "num_base_bdevs": 4, 00:12:30.002 "num_base_bdevs_discovered": 3, 00:12:30.002 "num_base_bdevs_operational": 3, 00:12:30.002 "base_bdevs_list": [ 00:12:30.002 { 00:12:30.002 "name": null, 00:12:30.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.002 "is_configured": false, 00:12:30.002 "data_offset": 0, 00:12:30.002 "data_size": 65536 00:12:30.002 }, 00:12:30.002 { 00:12:30.002 "name": "BaseBdev2", 00:12:30.002 "uuid": "61ed09ed-4c87-5f4d-b67c-4446c3358823", 00:12:30.002 "is_configured": true, 00:12:30.002 "data_offset": 0, 00:12:30.002 "data_size": 65536 00:12:30.002 }, 00:12:30.002 { 00:12:30.002 "name": "BaseBdev3", 00:12:30.002 "uuid": "12b8ee2a-5546-5f63-ab5a-d492d97c2cd4", 00:12:30.002 "is_configured": true, 00:12:30.002 "data_offset": 0, 00:12:30.002 "data_size": 65536 00:12:30.002 }, 00:12:30.002 { 00:12:30.002 "name": "BaseBdev4", 00:12:30.002 "uuid": "14946f15-8fcc-5338-8c09-4b7888f5bb9b", 00:12:30.002 "is_configured": true, 00:12:30.002 "data_offset": 0, 00:12:30.002 "data_size": 65536 00:12:30.002 } 00:12:30.002 ] 00:12:30.002 }' 00:12:30.002 01:32:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.002 01:32:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.261 01:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:30.261 01:32:29 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.261 01:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.261 [2024-10-09 01:32:29.093598] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:30.261 [2024-10-09 01:32:29.099365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:12:30.261 01:32:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.261 01:32:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:30.261 [2024-10-09 01:32:29.101493] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.641 "name": "raid_bdev1", 00:12:31.641 "uuid": "5858c0ae-1539-4c8c-a254-dcf3cc1c7387", 
00:12:31.641 "strip_size_kb": 0, 00:12:31.641 "state": "online", 00:12:31.641 "raid_level": "raid1", 00:12:31.641 "superblock": false, 00:12:31.641 "num_base_bdevs": 4, 00:12:31.641 "num_base_bdevs_discovered": 4, 00:12:31.641 "num_base_bdevs_operational": 4, 00:12:31.641 "process": { 00:12:31.641 "type": "rebuild", 00:12:31.641 "target": "spare", 00:12:31.641 "progress": { 00:12:31.641 "blocks": 20480, 00:12:31.641 "percent": 31 00:12:31.641 } 00:12:31.641 }, 00:12:31.641 "base_bdevs_list": [ 00:12:31.641 { 00:12:31.641 "name": "spare", 00:12:31.641 "uuid": "a0150eb5-8552-55c9-9f90-5da76349204c", 00:12:31.641 "is_configured": true, 00:12:31.641 "data_offset": 0, 00:12:31.641 "data_size": 65536 00:12:31.641 }, 00:12:31.641 { 00:12:31.641 "name": "BaseBdev2", 00:12:31.641 "uuid": "61ed09ed-4c87-5f4d-b67c-4446c3358823", 00:12:31.641 "is_configured": true, 00:12:31.641 "data_offset": 0, 00:12:31.641 "data_size": 65536 00:12:31.641 }, 00:12:31.641 { 00:12:31.641 "name": "BaseBdev3", 00:12:31.641 "uuid": "12b8ee2a-5546-5f63-ab5a-d492d97c2cd4", 00:12:31.641 "is_configured": true, 00:12:31.641 "data_offset": 0, 00:12:31.641 "data_size": 65536 00:12:31.641 }, 00:12:31.641 { 00:12:31.641 "name": "BaseBdev4", 00:12:31.641 "uuid": "14946f15-8fcc-5338-8c09-4b7888f5bb9b", 00:12:31.641 "is_configured": true, 00:12:31.641 "data_offset": 0, 00:12:31.641 "data_size": 65536 00:12:31.641 } 00:12:31.641 ] 00:12:31.641 }' 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 
00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.641 [2024-10-09 01:32:30.231048] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:31.641 [2024-10-09 01:32:30.311813] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:31.641 [2024-10-09 01:32:30.311899] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.641 [2024-10-09 01:32:30.311916] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:31.641 [2024-10-09 01:32:30.311929] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.641 "name": "raid_bdev1", 00:12:31.641 "uuid": "5858c0ae-1539-4c8c-a254-dcf3cc1c7387", 00:12:31.641 "strip_size_kb": 0, 00:12:31.641 "state": "online", 00:12:31.641 "raid_level": "raid1", 00:12:31.641 "superblock": false, 00:12:31.641 "num_base_bdevs": 4, 00:12:31.641 "num_base_bdevs_discovered": 3, 00:12:31.641 "num_base_bdevs_operational": 3, 00:12:31.641 "base_bdevs_list": [ 00:12:31.641 { 00:12:31.641 "name": null, 00:12:31.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.641 "is_configured": false, 00:12:31.641 "data_offset": 0, 00:12:31.641 "data_size": 65536 00:12:31.641 }, 00:12:31.641 { 00:12:31.641 "name": "BaseBdev2", 00:12:31.641 "uuid": "61ed09ed-4c87-5f4d-b67c-4446c3358823", 00:12:31.641 "is_configured": true, 00:12:31.641 "data_offset": 0, 00:12:31.641 "data_size": 65536 00:12:31.641 }, 00:12:31.641 { 00:12:31.641 "name": "BaseBdev3", 00:12:31.641 "uuid": "12b8ee2a-5546-5f63-ab5a-d492d97c2cd4", 00:12:31.641 "is_configured": true, 00:12:31.641 "data_offset": 0, 00:12:31.641 "data_size": 65536 00:12:31.641 }, 00:12:31.641 { 00:12:31.641 "name": "BaseBdev4", 00:12:31.641 "uuid": "14946f15-8fcc-5338-8c09-4b7888f5bb9b", 00:12:31.641 "is_configured": true, 00:12:31.641 "data_offset": 0, 00:12:31.641 "data_size": 65536 00:12:31.641 } 00:12:31.641 ] 00:12:31.641 }' 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.641 01:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.900 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:31.900 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.900 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:31.901 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:31.901 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.901 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.901 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.901 01:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.901 01:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.901 01:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.901 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.901 "name": "raid_bdev1", 00:12:31.901 "uuid": "5858c0ae-1539-4c8c-a254-dcf3cc1c7387", 00:12:31.901 "strip_size_kb": 0, 00:12:31.901 "state": "online", 00:12:31.901 "raid_level": "raid1", 00:12:31.901 "superblock": false, 00:12:31.901 "num_base_bdevs": 4, 00:12:31.901 "num_base_bdevs_discovered": 3, 00:12:31.901 "num_base_bdevs_operational": 3, 00:12:31.901 "base_bdevs_list": [ 00:12:31.901 { 00:12:31.901 "name": null, 00:12:31.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.901 "is_configured": false, 00:12:31.901 "data_offset": 0, 00:12:31.901 "data_size": 65536 00:12:31.901 }, 00:12:31.901 { 00:12:31.901 "name": "BaseBdev2", 00:12:31.901 "uuid": 
"61ed09ed-4c87-5f4d-b67c-4446c3358823", 00:12:31.901 "is_configured": true, 00:12:31.901 "data_offset": 0, 00:12:31.901 "data_size": 65536 00:12:31.901 }, 00:12:31.901 { 00:12:31.901 "name": "BaseBdev3", 00:12:31.901 "uuid": "12b8ee2a-5546-5f63-ab5a-d492d97c2cd4", 00:12:31.901 "is_configured": true, 00:12:31.901 "data_offset": 0, 00:12:31.901 "data_size": 65536 00:12:31.901 }, 00:12:31.901 { 00:12:31.901 "name": "BaseBdev4", 00:12:31.901 "uuid": "14946f15-8fcc-5338-8c09-4b7888f5bb9b", 00:12:31.901 "is_configured": true, 00:12:31.901 "data_offset": 0, 00:12:31.901 "data_size": 65536 00:12:31.901 } 00:12:31.901 ] 00:12:31.901 }' 00:12:31.901 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.162 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:32.162 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.162 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:32.162 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:32.162 01:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.162 01:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.162 [2024-10-09 01:32:30.866257] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:32.162 [2024-10-09 01:32:30.871687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09f10 00:12:32.162 01:32:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.162 01:32:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:32.162 [2024-10-09 01:32:30.873892] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:33.122 01:32:31 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:33.122 01:32:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.122 01:32:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:33.122 01:32:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:33.122 01:32:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.122 01:32:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.122 01:32:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.122 01:32:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.122 01:32:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.122 01:32:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.122 01:32:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.122 "name": "raid_bdev1", 00:12:33.122 "uuid": "5858c0ae-1539-4c8c-a254-dcf3cc1c7387", 00:12:33.122 "strip_size_kb": 0, 00:12:33.122 "state": "online", 00:12:33.122 "raid_level": "raid1", 00:12:33.122 "superblock": false, 00:12:33.122 "num_base_bdevs": 4, 00:12:33.122 "num_base_bdevs_discovered": 4, 00:12:33.122 "num_base_bdevs_operational": 4, 00:12:33.122 "process": { 00:12:33.122 "type": "rebuild", 00:12:33.122 "target": "spare", 00:12:33.122 "progress": { 00:12:33.122 "blocks": 20480, 00:12:33.122 "percent": 31 00:12:33.122 } 00:12:33.122 }, 00:12:33.122 "base_bdevs_list": [ 00:12:33.122 { 00:12:33.122 "name": "spare", 00:12:33.122 "uuid": "a0150eb5-8552-55c9-9f90-5da76349204c", 00:12:33.122 "is_configured": true, 00:12:33.122 "data_offset": 0, 00:12:33.122 "data_size": 65536 00:12:33.122 }, 00:12:33.122 { 
00:12:33.122 "name": "BaseBdev2", 00:12:33.122 "uuid": "61ed09ed-4c87-5f4d-b67c-4446c3358823", 00:12:33.122 "is_configured": true, 00:12:33.122 "data_offset": 0, 00:12:33.122 "data_size": 65536 00:12:33.122 }, 00:12:33.122 { 00:12:33.122 "name": "BaseBdev3", 00:12:33.122 "uuid": "12b8ee2a-5546-5f63-ab5a-d492d97c2cd4", 00:12:33.122 "is_configured": true, 00:12:33.122 "data_offset": 0, 00:12:33.122 "data_size": 65536 00:12:33.122 }, 00:12:33.122 { 00:12:33.122 "name": "BaseBdev4", 00:12:33.122 "uuid": "14946f15-8fcc-5338-8c09-4b7888f5bb9b", 00:12:33.122 "is_configured": true, 00:12:33.122 "data_offset": 0, 00:12:33.122 "data_size": 65536 00:12:33.122 } 00:12:33.122 ] 00:12:33.122 }' 00:12:33.122 01:32:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:33.122 01:32:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:33.122 01:32:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.122 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:33.122 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:33.122 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:33.122 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:33.122 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:33.122 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:33.122 01:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.122 01:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.382 [2024-10-09 01:32:32.015862] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:33.382 
[2024-10-09 01:32:32.083482] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09f10 00:12:33.382 01:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.382 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:33.382 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:33.382 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:33.382 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.382 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:33.382 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:33.382 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.382 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.382 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.382 01:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.382 01:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.382 01:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.382 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.382 "name": "raid_bdev1", 00:12:33.382 "uuid": "5858c0ae-1539-4c8c-a254-dcf3cc1c7387", 00:12:33.382 "strip_size_kb": 0, 00:12:33.382 "state": "online", 00:12:33.382 "raid_level": "raid1", 00:12:33.382 "superblock": false, 00:12:33.382 "num_base_bdevs": 4, 00:12:33.382 "num_base_bdevs_discovered": 3, 00:12:33.382 "num_base_bdevs_operational": 3, 00:12:33.382 "process": { 
00:12:33.382 "type": "rebuild", 00:12:33.382 "target": "spare", 00:12:33.382 "progress": { 00:12:33.382 "blocks": 24576, 00:12:33.382 "percent": 37 00:12:33.382 } 00:12:33.382 }, 00:12:33.382 "base_bdevs_list": [ 00:12:33.382 { 00:12:33.382 "name": "spare", 00:12:33.382 "uuid": "a0150eb5-8552-55c9-9f90-5da76349204c", 00:12:33.382 "is_configured": true, 00:12:33.382 "data_offset": 0, 00:12:33.382 "data_size": 65536 00:12:33.382 }, 00:12:33.382 { 00:12:33.382 "name": null, 00:12:33.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.382 "is_configured": false, 00:12:33.382 "data_offset": 0, 00:12:33.382 "data_size": 65536 00:12:33.382 }, 00:12:33.382 { 00:12:33.383 "name": "BaseBdev3", 00:12:33.383 "uuid": "12b8ee2a-5546-5f63-ab5a-d492d97c2cd4", 00:12:33.383 "is_configured": true, 00:12:33.383 "data_offset": 0, 00:12:33.383 "data_size": 65536 00:12:33.383 }, 00:12:33.383 { 00:12:33.383 "name": "BaseBdev4", 00:12:33.383 "uuid": "14946f15-8fcc-5338-8c09-4b7888f5bb9b", 00:12:33.383 "is_configured": true, 00:12:33.383 "data_offset": 0, 00:12:33.383 "data_size": 65536 00:12:33.383 } 00:12:33.383 ] 00:12:33.383 }' 00:12:33.383 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:33.383 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:33.383 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.383 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:33.383 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=368 00:12:33.383 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:33.383 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:33.383 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:12:33.383 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:33.383 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:33.383 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.383 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.383 01:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.383 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.383 01:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.383 01:32:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.383 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.383 "name": "raid_bdev1", 00:12:33.383 "uuid": "5858c0ae-1539-4c8c-a254-dcf3cc1c7387", 00:12:33.383 "strip_size_kb": 0, 00:12:33.383 "state": "online", 00:12:33.383 "raid_level": "raid1", 00:12:33.383 "superblock": false, 00:12:33.383 "num_base_bdevs": 4, 00:12:33.383 "num_base_bdevs_discovered": 3, 00:12:33.383 "num_base_bdevs_operational": 3, 00:12:33.383 "process": { 00:12:33.383 "type": "rebuild", 00:12:33.383 "target": "spare", 00:12:33.383 "progress": { 00:12:33.383 "blocks": 26624, 00:12:33.383 "percent": 40 00:12:33.383 } 00:12:33.383 }, 00:12:33.383 "base_bdevs_list": [ 00:12:33.383 { 00:12:33.383 "name": "spare", 00:12:33.383 "uuid": "a0150eb5-8552-55c9-9f90-5da76349204c", 00:12:33.383 "is_configured": true, 00:12:33.383 "data_offset": 0, 00:12:33.383 "data_size": 65536 00:12:33.383 }, 00:12:33.383 { 00:12:33.383 "name": null, 00:12:33.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.383 "is_configured": false, 00:12:33.383 "data_offset": 0, 00:12:33.383 "data_size": 65536 00:12:33.383 }, 
00:12:33.383 { 00:12:33.383 "name": "BaseBdev3", 00:12:33.383 "uuid": "12b8ee2a-5546-5f63-ab5a-d492d97c2cd4", 00:12:33.383 "is_configured": true, 00:12:33.383 "data_offset": 0, 00:12:33.383 "data_size": 65536 00:12:33.383 }, 00:12:33.383 { 00:12:33.383 "name": "BaseBdev4", 00:12:33.383 "uuid": "14946f15-8fcc-5338-8c09-4b7888f5bb9b", 00:12:33.383 "is_configured": true, 00:12:33.383 "data_offset": 0, 00:12:33.383 "data_size": 65536 00:12:33.383 } 00:12:33.383 ] 00:12:33.383 }' 00:12:33.383 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:33.643 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:33.643 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.643 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:33.643 01:32:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:34.581 01:32:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:34.581 01:32:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:34.581 01:32:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.581 01:32:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:34.581 01:32:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:34.581 01:32:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.581 01:32:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.581 01:32:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.581 01:32:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:34.581 01:32:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.581 01:32:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.581 01:32:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.581 "name": "raid_bdev1", 00:12:34.581 "uuid": "5858c0ae-1539-4c8c-a254-dcf3cc1c7387", 00:12:34.581 "strip_size_kb": 0, 00:12:34.581 "state": "online", 00:12:34.581 "raid_level": "raid1", 00:12:34.581 "superblock": false, 00:12:34.581 "num_base_bdevs": 4, 00:12:34.581 "num_base_bdevs_discovered": 3, 00:12:34.581 "num_base_bdevs_operational": 3, 00:12:34.581 "process": { 00:12:34.581 "type": "rebuild", 00:12:34.581 "target": "spare", 00:12:34.581 "progress": { 00:12:34.581 "blocks": 49152, 00:12:34.581 "percent": 75 00:12:34.581 } 00:12:34.581 }, 00:12:34.581 "base_bdevs_list": [ 00:12:34.581 { 00:12:34.581 "name": "spare", 00:12:34.581 "uuid": "a0150eb5-8552-55c9-9f90-5da76349204c", 00:12:34.581 "is_configured": true, 00:12:34.581 "data_offset": 0, 00:12:34.581 "data_size": 65536 00:12:34.581 }, 00:12:34.582 { 00:12:34.582 "name": null, 00:12:34.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.582 "is_configured": false, 00:12:34.582 "data_offset": 0, 00:12:34.582 "data_size": 65536 00:12:34.582 }, 00:12:34.582 { 00:12:34.582 "name": "BaseBdev3", 00:12:34.582 "uuid": "12b8ee2a-5546-5f63-ab5a-d492d97c2cd4", 00:12:34.582 "is_configured": true, 00:12:34.582 "data_offset": 0, 00:12:34.582 "data_size": 65536 00:12:34.582 }, 00:12:34.582 { 00:12:34.582 "name": "BaseBdev4", 00:12:34.582 "uuid": "14946f15-8fcc-5338-8c09-4b7888f5bb9b", 00:12:34.582 "is_configured": true, 00:12:34.582 "data_offset": 0, 00:12:34.582 "data_size": 65536 00:12:34.582 } 00:12:34.582 ] 00:12:34.582 }' 00:12:34.582 01:32:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.582 01:32:33 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:34.582 01:32:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.841 01:32:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:34.841 01:32:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:35.409 [2024-10-09 01:32:34.100091] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:35.409 [2024-10-09 01:32:34.100180] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:35.409 [2024-10-09 01:32:34.100257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.668 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:35.668 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:35.668 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.668 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:35.668 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:35.668 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.668 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.668 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.668 01:32:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.668 01:32:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.668 01:32:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.928 01:32:34 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.928 "name": "raid_bdev1", 00:12:35.928 "uuid": "5858c0ae-1539-4c8c-a254-dcf3cc1c7387", 00:12:35.928 "strip_size_kb": 0, 00:12:35.928 "state": "online", 00:12:35.928 "raid_level": "raid1", 00:12:35.928 "superblock": false, 00:12:35.928 "num_base_bdevs": 4, 00:12:35.928 "num_base_bdevs_discovered": 3, 00:12:35.928 "num_base_bdevs_operational": 3, 00:12:35.928 "base_bdevs_list": [ 00:12:35.928 { 00:12:35.928 "name": "spare", 00:12:35.928 "uuid": "a0150eb5-8552-55c9-9f90-5da76349204c", 00:12:35.928 "is_configured": true, 00:12:35.928 "data_offset": 0, 00:12:35.928 "data_size": 65536 00:12:35.928 }, 00:12:35.928 { 00:12:35.928 "name": null, 00:12:35.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.928 "is_configured": false, 00:12:35.928 "data_offset": 0, 00:12:35.928 "data_size": 65536 00:12:35.928 }, 00:12:35.928 { 00:12:35.928 "name": "BaseBdev3", 00:12:35.928 "uuid": "12b8ee2a-5546-5f63-ab5a-d492d97c2cd4", 00:12:35.928 "is_configured": true, 00:12:35.928 "data_offset": 0, 00:12:35.928 "data_size": 65536 00:12:35.928 }, 00:12:35.928 { 00:12:35.928 "name": "BaseBdev4", 00:12:35.928 "uuid": "14946f15-8fcc-5338-8c09-4b7888f5bb9b", 00:12:35.928 "is_configured": true, 00:12:35.928 "data_offset": 0, 00:12:35.928 "data_size": 65536 00:12:35.928 } 00:12:35.928 ] 00:12:35.928 }' 00:12:35.928 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.928 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:35.928 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.928 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:35.928 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:35.928 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 
none none 00:12:35.928 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.928 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:35.928 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:35.928 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.928 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.928 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.928 01:32:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.928 01:32:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.928 01:32:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.928 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.928 "name": "raid_bdev1", 00:12:35.928 "uuid": "5858c0ae-1539-4c8c-a254-dcf3cc1c7387", 00:12:35.928 "strip_size_kb": 0, 00:12:35.928 "state": "online", 00:12:35.928 "raid_level": "raid1", 00:12:35.928 "superblock": false, 00:12:35.928 "num_base_bdevs": 4, 00:12:35.928 "num_base_bdevs_discovered": 3, 00:12:35.928 "num_base_bdevs_operational": 3, 00:12:35.928 "base_bdevs_list": [ 00:12:35.928 { 00:12:35.928 "name": "spare", 00:12:35.928 "uuid": "a0150eb5-8552-55c9-9f90-5da76349204c", 00:12:35.928 "is_configured": true, 00:12:35.928 "data_offset": 0, 00:12:35.928 "data_size": 65536 00:12:35.928 }, 00:12:35.928 { 00:12:35.928 "name": null, 00:12:35.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.928 "is_configured": false, 00:12:35.928 "data_offset": 0, 00:12:35.928 "data_size": 65536 00:12:35.928 }, 00:12:35.928 { 00:12:35.928 "name": "BaseBdev3", 00:12:35.928 "uuid": "12b8ee2a-5546-5f63-ab5a-d492d97c2cd4", 
00:12:35.928 "is_configured": true, 00:12:35.928 "data_offset": 0, 00:12:35.928 "data_size": 65536 00:12:35.928 }, 00:12:35.928 { 00:12:35.928 "name": "BaseBdev4", 00:12:35.928 "uuid": "14946f15-8fcc-5338-8c09-4b7888f5bb9b", 00:12:35.928 "is_configured": true, 00:12:35.928 "data_offset": 0, 00:12:35.928 "data_size": 65536 00:12:35.928 } 00:12:35.928 ] 00:12:35.928 }' 00:12:35.928 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.928 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:35.928 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.928 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:35.928 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:35.928 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.928 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.928 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.929 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.929 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:35.929 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.929 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.929 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.929 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.929 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:35.929 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.929 01:32:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.929 01:32:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.188 01:32:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.188 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.188 "name": "raid_bdev1", 00:12:36.188 "uuid": "5858c0ae-1539-4c8c-a254-dcf3cc1c7387", 00:12:36.188 "strip_size_kb": 0, 00:12:36.188 "state": "online", 00:12:36.188 "raid_level": "raid1", 00:12:36.188 "superblock": false, 00:12:36.188 "num_base_bdevs": 4, 00:12:36.188 "num_base_bdevs_discovered": 3, 00:12:36.188 "num_base_bdevs_operational": 3, 00:12:36.188 "base_bdevs_list": [ 00:12:36.188 { 00:12:36.188 "name": "spare", 00:12:36.188 "uuid": "a0150eb5-8552-55c9-9f90-5da76349204c", 00:12:36.188 "is_configured": true, 00:12:36.188 "data_offset": 0, 00:12:36.188 "data_size": 65536 00:12:36.188 }, 00:12:36.188 { 00:12:36.188 "name": null, 00:12:36.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.188 "is_configured": false, 00:12:36.188 "data_offset": 0, 00:12:36.188 "data_size": 65536 00:12:36.188 }, 00:12:36.188 { 00:12:36.188 "name": "BaseBdev3", 00:12:36.188 "uuid": "12b8ee2a-5546-5f63-ab5a-d492d97c2cd4", 00:12:36.188 "is_configured": true, 00:12:36.188 "data_offset": 0, 00:12:36.188 "data_size": 65536 00:12:36.188 }, 00:12:36.188 { 00:12:36.189 "name": "BaseBdev4", 00:12:36.189 "uuid": "14946f15-8fcc-5338-8c09-4b7888f5bb9b", 00:12:36.189 "is_configured": true, 00:12:36.189 "data_offset": 0, 00:12:36.189 "data_size": 65536 00:12:36.189 } 00:12:36.189 ] 00:12:36.189 }' 00:12:36.189 01:32:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.189 01:32:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:36.448 01:32:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:36.448 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.448 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.448 [2024-10-09 01:32:35.182916] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:36.448 [2024-10-09 01:32:35.182955] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:36.448 [2024-10-09 01:32:35.183086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:36.448 [2024-10-09 01:32:35.183181] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:36.448 [2024-10-09 01:32:35.183192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:36.448 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.448 01:32:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.448 01:32:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:36.448 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.448 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.448 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.448 01:32:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:36.448 01:32:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:36.448 01:32:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:36.448 01:32:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks 
/var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:36.448 01:32:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:36.448 01:32:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:36.448 01:32:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:36.448 01:32:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:36.448 01:32:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:36.448 01:32:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:36.448 01:32:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:36.448 01:32:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:36.448 01:32:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:36.707 /dev/nbd0 00:12:36.708 01:32:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:36.708 01:32:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:36.708 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:36.708 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:36.708 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:36.708 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:36.708 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:36.708 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:36.708 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:36.708 
01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:36.708 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:36.708 1+0 records in 00:12:36.708 1+0 records out 00:12:36.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375705 s, 10.9 MB/s 00:12:36.708 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.708 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:36.708 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.708 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:36.708 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:36.708 01:32:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:36.708 01:32:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:36.708 01:32:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:36.968 /dev/nbd1 00:12:36.968 01:32:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:36.968 01:32:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:36.968 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:36.968 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:36.968 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:36.968 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:36.968 01:32:35 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:36.968 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:36.968 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:36.968 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:36.968 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:36.968 1+0 records in 00:12:36.968 1+0 records out 00:12:36.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292851 s, 14.0 MB/s 00:12:36.968 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.968 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:36.968 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.968 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:36.968 01:32:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:36.968 01:32:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:36.968 01:32:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:36.968 01:32:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:36.968 01:32:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:36.968 01:32:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:36.968 01:32:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:36.968 01:32:35 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:36.968 01:32:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:36.968 01:32:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:36.968 01:32:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:37.227 01:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:37.227 01:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:37.227 01:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:37.227 01:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.227 01:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.227 01:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:37.227 01:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:37.227 01:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.227 01:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:37.227 01:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:37.486 01:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:37.486 01:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:37.486 01:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:37.486 01:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.486 01:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.486 01:32:36 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:37.486 01:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:37.486 01:32:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.486 01:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:37.486 01:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 89337 00:12:37.486 01:32:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 89337 ']' 00:12:37.486 01:32:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 89337 00:12:37.486 01:32:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:12:37.486 01:32:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:37.486 01:32:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89337 00:12:37.486 01:32:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:37.486 01:32:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:37.486 killing process with pid 89337 00:12:37.486 01:32:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89337' 00:12:37.486 01:32:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 89337 00:12:37.486 Received shutdown signal, test time was about 60.000000 seconds 00:12:37.486 00:12:37.486 Latency(us) 00:12:37.486 [2024-10-09T01:32:36.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.486 [2024-10-09T01:32:36.379Z] =================================================================================================================== 00:12:37.486 [2024-10-09T01:32:36.379Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:37.486 [2024-10-09 
01:32:36.319120] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:37.486 01:32:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 89337 00:12:37.746 [2024-10-09 01:32:36.410180] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:38.006 00:12:38.006 real 0m15.303s 00:12:38.006 user 0m17.272s 00:12:38.006 sys 0m3.120s 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.006 ************************************ 00:12:38.006 END TEST raid_rebuild_test 00:12:38.006 ************************************ 00:12:38.006 01:32:36 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:12:38.006 01:32:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:38.006 01:32:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:38.006 01:32:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:38.006 ************************************ 00:12:38.006 START TEST raid_rebuild_test_sb 00:12:38.006 ************************************ 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 
00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:38.006 01:32:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=89761 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 89761 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 89761 ']' 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:38.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:38.006 01:32:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.266 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:38.266 Zero copy mechanism will not be used. 
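The log above records `waitforlisten 89761`: the test launches `bdevperf` in the background and then blocks until the process is up and listening on its UNIX domain RPC socket (`/var/tmp/spdk.sock`). A minimal, hypothetical sketch of that polling pattern — the function name, socket paths, and retry count here are illustrative, not SPDK's actual implementation — looks like this:

```shell
#!/usr/bin/env bash
# Sketch of a waitforlisten-style helper: poll until a UNIX domain socket
# appears at the given path, giving up after a bounded number of retries.
wait_for_socket() {
    local sock=$1
    local retries=${2:-100}   # ~10s at 0.1s per attempt by default
    echo "Waiting for process to start up and listen on UNIX domain socket ${sock}..."
    while (( retries > 0 )); do
        # -S tests that the path exists and is a socket
        if [[ -S "$sock" ]]; then
            return 0
        fi
        retries=$(( retries - 1 ))
        sleep 0.1
    done
    return 1   # timed out; caller typically kills the stuck process
}
```

In the real harness the helper additionally verifies that the PID is still alive between attempts, so a crashed `bdevperf` fails fast instead of burning the full timeout.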
00:12:38.266 [2024-10-09 01:32:36.944719] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:12:38.266 [2024-10-09 01:32:36.944841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89761 ] 00:12:38.266 [2024-10-09 01:32:37.080217] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:38.266 [2024-10-09 01:32:37.108940] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.525 [2024-10-09 01:32:37.179733] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.525 [2024-10-09 01:32:37.256384] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.525 [2024-10-09 01:32:37.256432] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.095 BaseBdev1_malloc 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 
00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.095 [2024-10-09 01:32:37.787650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:39.095 [2024-10-09 01:32:37.787748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.095 [2024-10-09 01:32:37.787775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:39.095 [2024-10-09 01:32:37.787794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.095 [2024-10-09 01:32:37.790195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.095 [2024-10-09 01:32:37.790233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:39.095 BaseBdev1 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.095 BaseBdev2_malloc 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:12:39.095 [2024-10-09 01:32:37.839930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:39.095 [2024-10-09 01:32:37.840056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.095 [2024-10-09 01:32:37.840105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:39.095 [2024-10-09 01:32:37.840136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.095 [2024-10-09 01:32:37.844468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.095 [2024-10-09 01:32:37.844561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:39.095 BaseBdev2 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.095 BaseBdev3_malloc 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.095 [2024-10-09 01:32:37.875365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:39.095 [2024-10-09 01:32:37.875430] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.095 [2024-10-09 01:32:37.875454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:39.095 [2024-10-09 01:32:37.875465] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.095 [2024-10-09 01:32:37.877832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.095 [2024-10-09 01:32:37.877867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:39.095 BaseBdev3 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.095 BaseBdev4_malloc 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.095 [2024-10-09 01:32:37.909922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:39.095 [2024-10-09 01:32:37.909982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.095 [2024-10-09 01:32:37.910001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 
00:12:39.095 [2024-10-09 01:32:37.910013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.095 [2024-10-09 01:32:37.912327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.095 [2024-10-09 01:32:37.912361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:39.095 BaseBdev4 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.095 spare_malloc 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.095 spare_delay 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.095 [2024-10-09 01:32:37.956522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:39.095 [2024-10-09 01:32:37.956625] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.095 [2024-10-09 01:32:37.956646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:39.095 [2024-10-09 01:32:37.956656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.095 [2024-10-09 01:32:37.959015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.095 [2024-10-09 01:32:37.959049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:39.095 spare 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.095 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.095 [2024-10-09 01:32:37.968647] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:39.095 [2024-10-09 01:32:37.970783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:39.095 [2024-10-09 01:32:37.970851] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:39.095 [2024-10-09 01:32:37.970898] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:39.095 [2024-10-09 01:32:37.971091] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:39.095 [2024-10-09 01:32:37.971117] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:39.095 [2024-10-09 01:32:37.971354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:39.095 [2024-10-09 01:32:37.971537] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:39.095 [2024-10-09 01:32:37.971559] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:39.096 [2024-10-09 01:32:37.971690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.096 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.096 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:39.096 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.096 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.096 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.096 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.096 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:39.096 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.096 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.096 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.096 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.096 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.096 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.096 01:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.096 01:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:39.355 01:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.355 01:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.355 "name": "raid_bdev1", 00:12:39.355 "uuid": "b986425b-9a02-4a48-8ac7-1cd65460968f", 00:12:39.355 "strip_size_kb": 0, 00:12:39.355 "state": "online", 00:12:39.355 "raid_level": "raid1", 00:12:39.355 "superblock": true, 00:12:39.355 "num_base_bdevs": 4, 00:12:39.355 "num_base_bdevs_discovered": 4, 00:12:39.355 "num_base_bdevs_operational": 4, 00:12:39.355 "base_bdevs_list": [ 00:12:39.355 { 00:12:39.355 "name": "BaseBdev1", 00:12:39.355 "uuid": "422c9cdc-93c2-570e-ad55-e26629e9ebc7", 00:12:39.355 "is_configured": true, 00:12:39.355 "data_offset": 2048, 00:12:39.355 "data_size": 63488 00:12:39.355 }, 00:12:39.355 { 00:12:39.355 "name": "BaseBdev2", 00:12:39.355 "uuid": "80e6c92d-123b-59dd-9080-4c43bb164cfb", 00:12:39.355 "is_configured": true, 00:12:39.355 "data_offset": 2048, 00:12:39.355 "data_size": 63488 00:12:39.355 }, 00:12:39.355 { 00:12:39.355 "name": "BaseBdev3", 00:12:39.355 "uuid": "0da5fa91-6e9a-5f30-a863-43632c73e677", 00:12:39.355 "is_configured": true, 00:12:39.355 "data_offset": 2048, 00:12:39.355 "data_size": 63488 00:12:39.355 }, 00:12:39.355 { 00:12:39.355 "name": "BaseBdev4", 00:12:39.355 "uuid": "6e38a9ba-6897-5a74-a189-e364f0a4f434", 00:12:39.355 "is_configured": true, 00:12:39.355 "data_offset": 2048, 00:12:39.355 "data_size": 63488 00:12:39.355 } 00:12:39.355 ] 00:12:39.355 }' 00:12:39.355 01:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.355 01:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.615 01:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:39.615 01:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.615 01:32:38 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.615 01:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:39.615 [2024-10-09 01:32:38.421073] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:39.615 01:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.615 01:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:39.615 01:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.615 01:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.615 01:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.615 01:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:39.615 01:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:39.875 [2024-10-09 01:32:38.700908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:39.875 /dev/nbd0 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:12:39.875 1+0 records in 00:12:39.875 1+0 records out 00:12:39.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000522312 s, 7.8 MB/s 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:39.875 01:32:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:45.166 63488+0 records in 00:12:45.166 63488+0 records out 00:12:45.166 32505856 bytes (33 MB, 31 MiB) copied, 4.73792 s, 6.9 MB/s 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:45.166 01:32:43 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:45.166 [2024-10-09 01:32:43.696868] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.166 [2024-10-09 01:32:43.732959] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.166 01:32:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.166 "name": "raid_bdev1", 00:12:45.166 "uuid": "b986425b-9a02-4a48-8ac7-1cd65460968f", 00:12:45.166 "strip_size_kb": 0, 00:12:45.166 "state": "online", 00:12:45.166 "raid_level": "raid1", 00:12:45.166 "superblock": true, 00:12:45.166 "num_base_bdevs": 4, 00:12:45.166 "num_base_bdevs_discovered": 3, 00:12:45.166 "num_base_bdevs_operational": 3, 00:12:45.166 "base_bdevs_list": [ 00:12:45.166 { 00:12:45.166 "name": null, 00:12:45.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.166 
"is_configured": false, 00:12:45.166 "data_offset": 0, 00:12:45.166 "data_size": 63488 00:12:45.166 }, 00:12:45.166 { 00:12:45.166 "name": "BaseBdev2", 00:12:45.166 "uuid": "80e6c92d-123b-59dd-9080-4c43bb164cfb", 00:12:45.166 "is_configured": true, 00:12:45.166 "data_offset": 2048, 00:12:45.166 "data_size": 63488 00:12:45.166 }, 00:12:45.166 { 00:12:45.166 "name": "BaseBdev3", 00:12:45.166 "uuid": "0da5fa91-6e9a-5f30-a863-43632c73e677", 00:12:45.166 "is_configured": true, 00:12:45.166 "data_offset": 2048, 00:12:45.166 "data_size": 63488 00:12:45.166 }, 00:12:45.166 { 00:12:45.166 "name": "BaseBdev4", 00:12:45.166 "uuid": "6e38a9ba-6897-5a74-a189-e364f0a4f434", 00:12:45.166 "is_configured": true, 00:12:45.166 "data_offset": 2048, 00:12:45.166 "data_size": 63488 00:12:45.166 } 00:12:45.166 ] 00:12:45.166 }' 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.166 01:32:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.426 01:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:45.426 01:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.426 01:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.426 [2024-10-09 01:32:44.185098] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:45.426 [2024-10-09 01:32:44.191024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:12:45.426 01:32:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.426 01:32:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:45.426 [2024-10-09 01:32:44.193277] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:46.365 01:32:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:46.365 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.365 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:46.365 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:46.365 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.365 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.365 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.365 01:32:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.365 01:32:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.365 01:32:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.365 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.365 "name": "raid_bdev1", 00:12:46.365 "uuid": "b986425b-9a02-4a48-8ac7-1cd65460968f", 00:12:46.365 "strip_size_kb": 0, 00:12:46.365 "state": "online", 00:12:46.365 "raid_level": "raid1", 00:12:46.365 "superblock": true, 00:12:46.365 "num_base_bdevs": 4, 00:12:46.365 "num_base_bdevs_discovered": 4, 00:12:46.365 "num_base_bdevs_operational": 4, 00:12:46.365 "process": { 00:12:46.365 "type": "rebuild", 00:12:46.365 "target": "spare", 00:12:46.365 "progress": { 00:12:46.365 "blocks": 20480, 00:12:46.365 "percent": 32 00:12:46.365 } 00:12:46.365 }, 00:12:46.365 "base_bdevs_list": [ 00:12:46.365 { 00:12:46.365 "name": "spare", 00:12:46.365 "uuid": "9fc05918-a134-5720-9176-8dc62204e358", 00:12:46.365 "is_configured": true, 00:12:46.365 "data_offset": 2048, 00:12:46.365 "data_size": 63488 00:12:46.365 }, 00:12:46.365 { 
00:12:46.365 "name": "BaseBdev2", 00:12:46.365 "uuid": "80e6c92d-123b-59dd-9080-4c43bb164cfb", 00:12:46.365 "is_configured": true, 00:12:46.365 "data_offset": 2048, 00:12:46.365 "data_size": 63488 00:12:46.365 }, 00:12:46.365 { 00:12:46.365 "name": "BaseBdev3", 00:12:46.365 "uuid": "0da5fa91-6e9a-5f30-a863-43632c73e677", 00:12:46.365 "is_configured": true, 00:12:46.365 "data_offset": 2048, 00:12:46.365 "data_size": 63488 00:12:46.365 }, 00:12:46.365 { 00:12:46.365 "name": "BaseBdev4", 00:12:46.365 "uuid": "6e38a9ba-6897-5a74-a189-e364f0a4f434", 00:12:46.365 "is_configured": true, 00:12:46.365 "data_offset": 2048, 00:12:46.365 "data_size": 63488 00:12:46.365 } 00:12:46.365 ] 00:12:46.365 }' 00:12:46.365 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.624 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:46.624 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.624 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:46.624 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:46.624 01:32:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.624 01:32:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.624 [2024-10-09 01:32:45.318977] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:46.624 [2024-10-09 01:32:45.403753] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:46.624 [2024-10-09 01:32:45.403878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.624 [2024-10-09 01:32:45.403933] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:46.625 [2024-10-09 01:32:45.403962] 
bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:46.625 01:32:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.625 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:46.625 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.625 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.625 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.625 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.625 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:46.625 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.625 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.625 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.625 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.625 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.625 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.625 01:32:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.625 01:32:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.625 01:32:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.625 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.625 "name": 
"raid_bdev1", 00:12:46.625 "uuid": "b986425b-9a02-4a48-8ac7-1cd65460968f", 00:12:46.625 "strip_size_kb": 0, 00:12:46.625 "state": "online", 00:12:46.625 "raid_level": "raid1", 00:12:46.625 "superblock": true, 00:12:46.625 "num_base_bdevs": 4, 00:12:46.625 "num_base_bdevs_discovered": 3, 00:12:46.625 "num_base_bdevs_operational": 3, 00:12:46.625 "base_bdevs_list": [ 00:12:46.625 { 00:12:46.625 "name": null, 00:12:46.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.625 "is_configured": false, 00:12:46.625 "data_offset": 0, 00:12:46.625 "data_size": 63488 00:12:46.625 }, 00:12:46.625 { 00:12:46.625 "name": "BaseBdev2", 00:12:46.625 "uuid": "80e6c92d-123b-59dd-9080-4c43bb164cfb", 00:12:46.625 "is_configured": true, 00:12:46.625 "data_offset": 2048, 00:12:46.625 "data_size": 63488 00:12:46.625 }, 00:12:46.625 { 00:12:46.625 "name": "BaseBdev3", 00:12:46.625 "uuid": "0da5fa91-6e9a-5f30-a863-43632c73e677", 00:12:46.625 "is_configured": true, 00:12:46.625 "data_offset": 2048, 00:12:46.625 "data_size": 63488 00:12:46.625 }, 00:12:46.625 { 00:12:46.625 "name": "BaseBdev4", 00:12:46.625 "uuid": "6e38a9ba-6897-5a74-a189-e364f0a4f434", 00:12:46.625 "is_configured": true, 00:12:46.625 "data_offset": 2048, 00:12:46.625 "data_size": 63488 00:12:46.625 } 00:12:46.625 ] 00:12:46.625 }' 00:12:46.625 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.625 01:32:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.193 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:47.193 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.193 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:47.193 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:47.193 01:32:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.193 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.193 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.193 01:32:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.193 01:32:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.193 01:32:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.193 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.193 "name": "raid_bdev1", 00:12:47.193 "uuid": "b986425b-9a02-4a48-8ac7-1cd65460968f", 00:12:47.193 "strip_size_kb": 0, 00:12:47.193 "state": "online", 00:12:47.193 "raid_level": "raid1", 00:12:47.193 "superblock": true, 00:12:47.193 "num_base_bdevs": 4, 00:12:47.193 "num_base_bdevs_discovered": 3, 00:12:47.193 "num_base_bdevs_operational": 3, 00:12:47.193 "base_bdevs_list": [ 00:12:47.193 { 00:12:47.193 "name": null, 00:12:47.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.193 "is_configured": false, 00:12:47.193 "data_offset": 0, 00:12:47.193 "data_size": 63488 00:12:47.193 }, 00:12:47.193 { 00:12:47.193 "name": "BaseBdev2", 00:12:47.193 "uuid": "80e6c92d-123b-59dd-9080-4c43bb164cfb", 00:12:47.193 "is_configured": true, 00:12:47.193 "data_offset": 2048, 00:12:47.193 "data_size": 63488 00:12:47.193 }, 00:12:47.193 { 00:12:47.193 "name": "BaseBdev3", 00:12:47.193 "uuid": "0da5fa91-6e9a-5f30-a863-43632c73e677", 00:12:47.193 "is_configured": true, 00:12:47.194 "data_offset": 2048, 00:12:47.194 "data_size": 63488 00:12:47.194 }, 00:12:47.194 { 00:12:47.194 "name": "BaseBdev4", 00:12:47.194 "uuid": "6e38a9ba-6897-5a74-a189-e364f0a4f434", 00:12:47.194 "is_configured": true, 00:12:47.194 "data_offset": 2048, 00:12:47.194 
"data_size": 63488 00:12:47.194 } 00:12:47.194 ] 00:12:47.194 }' 00:12:47.194 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.194 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:47.194 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.194 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:47.194 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:47.194 01:32:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.194 01:32:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.194 [2024-10-09 01:32:45.962758] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:47.194 [2024-10-09 01:32:45.968592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca36a0 00:12:47.194 01:32:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.194 01:32:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:47.194 [2024-10-09 01:32:45.970842] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:48.133 01:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.133 01:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.133 01:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.133 01:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.133 01:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:12:48.133 01:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.133 01:32:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.133 01:32:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.133 01:32:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.133 01:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.393 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.393 "name": "raid_bdev1", 00:12:48.393 "uuid": "b986425b-9a02-4a48-8ac7-1cd65460968f", 00:12:48.393 "strip_size_kb": 0, 00:12:48.393 "state": "online", 00:12:48.393 "raid_level": "raid1", 00:12:48.393 "superblock": true, 00:12:48.393 "num_base_bdevs": 4, 00:12:48.393 "num_base_bdevs_discovered": 4, 00:12:48.393 "num_base_bdevs_operational": 4, 00:12:48.393 "process": { 00:12:48.393 "type": "rebuild", 00:12:48.393 "target": "spare", 00:12:48.393 "progress": { 00:12:48.393 "blocks": 20480, 00:12:48.393 "percent": 32 00:12:48.393 } 00:12:48.393 }, 00:12:48.393 "base_bdevs_list": [ 00:12:48.393 { 00:12:48.393 "name": "spare", 00:12:48.393 "uuid": "9fc05918-a134-5720-9176-8dc62204e358", 00:12:48.393 "is_configured": true, 00:12:48.393 "data_offset": 2048, 00:12:48.393 "data_size": 63488 00:12:48.393 }, 00:12:48.393 { 00:12:48.393 "name": "BaseBdev2", 00:12:48.393 "uuid": "80e6c92d-123b-59dd-9080-4c43bb164cfb", 00:12:48.393 "is_configured": true, 00:12:48.393 "data_offset": 2048, 00:12:48.393 "data_size": 63488 00:12:48.393 }, 00:12:48.393 { 00:12:48.393 "name": "BaseBdev3", 00:12:48.393 "uuid": "0da5fa91-6e9a-5f30-a863-43632c73e677", 00:12:48.393 "is_configured": true, 00:12:48.393 "data_offset": 2048, 00:12:48.393 "data_size": 63488 00:12:48.393 }, 00:12:48.393 { 00:12:48.393 "name": "BaseBdev4", 00:12:48.393 "uuid": 
"6e38a9ba-6897-5a74-a189-e364f0a4f434", 00:12:48.393 "is_configured": true, 00:12:48.393 "data_offset": 2048, 00:12:48.393 "data_size": 63488 00:12:48.393 } 00:12:48.393 ] 00:12:48.393 }' 00:12:48.393 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.393 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.393 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.393 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.393 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:48.393 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:48.393 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:48.393 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:48.393 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:48.393 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:48.393 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:48.393 01:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.393 01:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.393 [2024-10-09 01:32:47.139272] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:48.393 [2024-10-09 01:32:47.280812] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca36a0 00:12:48.393 01:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.653 01:32:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.653 "name": "raid_bdev1", 00:12:48.653 "uuid": "b986425b-9a02-4a48-8ac7-1cd65460968f", 00:12:48.653 "strip_size_kb": 0, 00:12:48.653 "state": "online", 00:12:48.653 "raid_level": "raid1", 00:12:48.653 "superblock": true, 00:12:48.653 "num_base_bdevs": 4, 00:12:48.653 "num_base_bdevs_discovered": 3, 00:12:48.653 "num_base_bdevs_operational": 3, 00:12:48.653 "process": { 00:12:48.653 "type": "rebuild", 00:12:48.653 "target": "spare", 00:12:48.653 "progress": { 00:12:48.653 "blocks": 24576, 00:12:48.653 "percent": 38 00:12:48.653 } 00:12:48.653 }, 00:12:48.653 "base_bdevs_list": 
[ 00:12:48.653 { 00:12:48.653 "name": "spare", 00:12:48.653 "uuid": "9fc05918-a134-5720-9176-8dc62204e358", 00:12:48.653 "is_configured": true, 00:12:48.653 "data_offset": 2048, 00:12:48.653 "data_size": 63488 00:12:48.653 }, 00:12:48.653 { 00:12:48.653 "name": null, 00:12:48.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.653 "is_configured": false, 00:12:48.653 "data_offset": 0, 00:12:48.653 "data_size": 63488 00:12:48.653 }, 00:12:48.653 { 00:12:48.653 "name": "BaseBdev3", 00:12:48.653 "uuid": "0da5fa91-6e9a-5f30-a863-43632c73e677", 00:12:48.653 "is_configured": true, 00:12:48.653 "data_offset": 2048, 00:12:48.653 "data_size": 63488 00:12:48.653 }, 00:12:48.653 { 00:12:48.653 "name": "BaseBdev4", 00:12:48.653 "uuid": "6e38a9ba-6897-5a74-a189-e364f0a4f434", 00:12:48.653 "is_configured": true, 00:12:48.653 "data_offset": 2048, 00:12:48.653 "data_size": 63488 00:12:48.653 } 00:12:48.653 ] 00:12:48.653 }' 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=383 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.653 01:32:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.653 "name": "raid_bdev1", 00:12:48.653 "uuid": "b986425b-9a02-4a48-8ac7-1cd65460968f", 00:12:48.653 "strip_size_kb": 0, 00:12:48.653 "state": "online", 00:12:48.653 "raid_level": "raid1", 00:12:48.653 "superblock": true, 00:12:48.653 "num_base_bdevs": 4, 00:12:48.653 "num_base_bdevs_discovered": 3, 00:12:48.653 "num_base_bdevs_operational": 3, 00:12:48.653 "process": { 00:12:48.653 "type": "rebuild", 00:12:48.653 "target": "spare", 00:12:48.653 "progress": { 00:12:48.653 "blocks": 26624, 00:12:48.653 "percent": 41 00:12:48.653 } 00:12:48.653 }, 00:12:48.653 "base_bdevs_list": [ 00:12:48.653 { 00:12:48.653 "name": "spare", 00:12:48.653 "uuid": "9fc05918-a134-5720-9176-8dc62204e358", 00:12:48.653 "is_configured": true, 00:12:48.653 "data_offset": 2048, 00:12:48.653 "data_size": 63488 00:12:48.653 }, 00:12:48.653 { 00:12:48.653 "name": null, 00:12:48.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.653 "is_configured": false, 00:12:48.653 "data_offset": 0, 00:12:48.653 "data_size": 63488 00:12:48.653 }, 00:12:48.653 { 00:12:48.653 "name": "BaseBdev3", 00:12:48.653 "uuid": "0da5fa91-6e9a-5f30-a863-43632c73e677", 00:12:48.653 
"is_configured": true, 00:12:48.653 "data_offset": 2048, 00:12:48.653 "data_size": 63488 00:12:48.653 }, 00:12:48.653 { 00:12:48.653 "name": "BaseBdev4", 00:12:48.653 "uuid": "6e38a9ba-6897-5a74-a189-e364f0a4f434", 00:12:48.653 "is_configured": true, 00:12:48.653 "data_offset": 2048, 00:12:48.653 "data_size": 63488 00:12:48.653 } 00:12:48.653 ] 00:12:48.653 }' 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.653 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.913 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.913 01:32:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:49.853 01:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:49.853 01:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:49.853 01:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.853 01:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.853 01:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.853 01:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.853 01:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.853 01:32:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.853 01:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.853 01:32:48 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:49.853 01:32:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.853 01:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.853 "name": "raid_bdev1", 00:12:49.853 "uuid": "b986425b-9a02-4a48-8ac7-1cd65460968f", 00:12:49.853 "strip_size_kb": 0, 00:12:49.853 "state": "online", 00:12:49.853 "raid_level": "raid1", 00:12:49.853 "superblock": true, 00:12:49.853 "num_base_bdevs": 4, 00:12:49.853 "num_base_bdevs_discovered": 3, 00:12:49.853 "num_base_bdevs_operational": 3, 00:12:49.853 "process": { 00:12:49.853 "type": "rebuild", 00:12:49.853 "target": "spare", 00:12:49.853 "progress": { 00:12:49.853 "blocks": 49152, 00:12:49.853 "percent": 77 00:12:49.853 } 00:12:49.853 }, 00:12:49.853 "base_bdevs_list": [ 00:12:49.853 { 00:12:49.853 "name": "spare", 00:12:49.853 "uuid": "9fc05918-a134-5720-9176-8dc62204e358", 00:12:49.853 "is_configured": true, 00:12:49.853 "data_offset": 2048, 00:12:49.853 "data_size": 63488 00:12:49.853 }, 00:12:49.853 { 00:12:49.853 "name": null, 00:12:49.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.853 "is_configured": false, 00:12:49.853 "data_offset": 0, 00:12:49.853 "data_size": 63488 00:12:49.853 }, 00:12:49.853 { 00:12:49.853 "name": "BaseBdev3", 00:12:49.853 "uuid": "0da5fa91-6e9a-5f30-a863-43632c73e677", 00:12:49.853 "is_configured": true, 00:12:49.853 "data_offset": 2048, 00:12:49.853 "data_size": 63488 00:12:49.853 }, 00:12:49.853 { 00:12:49.853 "name": "BaseBdev4", 00:12:49.853 "uuid": "6e38a9ba-6897-5a74-a189-e364f0a4f434", 00:12:49.853 "is_configured": true, 00:12:49.853 "data_offset": 2048, 00:12:49.853 "data_size": 63488 00:12:49.853 } 00:12:49.853 ] 00:12:49.853 }' 00:12:49.853 01:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.853 01:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:12:49.853 01:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.853 01:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:49.853 01:32:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:50.422 [2024-10-09 01:32:49.196740] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:50.422 [2024-10-09 01:32:49.196891] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:50.422 [2024-10-09 01:32:49.197035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.991 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:50.991 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.991 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.991 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.991 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.991 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.991 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.991 01:32:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.991 01:32:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.991 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.991 01:32:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.991 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- 
# raid_bdev_info='{ 00:12:50.991 "name": "raid_bdev1", 00:12:50.991 "uuid": "b986425b-9a02-4a48-8ac7-1cd65460968f", 00:12:50.991 "strip_size_kb": 0, 00:12:50.991 "state": "online", 00:12:50.991 "raid_level": "raid1", 00:12:50.991 "superblock": true, 00:12:50.992 "num_base_bdevs": 4, 00:12:50.992 "num_base_bdevs_discovered": 3, 00:12:50.992 "num_base_bdevs_operational": 3, 00:12:50.992 "base_bdevs_list": [ 00:12:50.992 { 00:12:50.992 "name": "spare", 00:12:50.992 "uuid": "9fc05918-a134-5720-9176-8dc62204e358", 00:12:50.992 "is_configured": true, 00:12:50.992 "data_offset": 2048, 00:12:50.992 "data_size": 63488 00:12:50.992 }, 00:12:50.992 { 00:12:50.992 "name": null, 00:12:50.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.992 "is_configured": false, 00:12:50.992 "data_offset": 0, 00:12:50.992 "data_size": 63488 00:12:50.992 }, 00:12:50.992 { 00:12:50.992 "name": "BaseBdev3", 00:12:50.992 "uuid": "0da5fa91-6e9a-5f30-a863-43632c73e677", 00:12:50.992 "is_configured": true, 00:12:50.992 "data_offset": 2048, 00:12:50.992 "data_size": 63488 00:12:50.992 }, 00:12:50.992 { 00:12:50.992 "name": "BaseBdev4", 00:12:50.992 "uuid": "6e38a9ba-6897-5a74-a189-e364f0a4f434", 00:12:50.992 "is_configured": true, 00:12:50.992 "data_offset": 2048, 00:12:50.992 "data_size": 63488 00:12:50.992 } 00:12:50.992 ] 00:12:50.992 }' 00:12:50.992 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.992 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:50.992 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.992 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:50.992 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:50.992 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 
none none 00:12:50.992 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.992 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:50.992 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:50.992 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.992 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.992 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.992 01:32:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.992 01:32:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.250 01:32:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.250 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.250 "name": "raid_bdev1", 00:12:51.250 "uuid": "b986425b-9a02-4a48-8ac7-1cd65460968f", 00:12:51.250 "strip_size_kb": 0, 00:12:51.250 "state": "online", 00:12:51.250 "raid_level": "raid1", 00:12:51.250 "superblock": true, 00:12:51.250 "num_base_bdevs": 4, 00:12:51.250 "num_base_bdevs_discovered": 3, 00:12:51.250 "num_base_bdevs_operational": 3, 00:12:51.250 "base_bdevs_list": [ 00:12:51.250 { 00:12:51.250 "name": "spare", 00:12:51.250 "uuid": "9fc05918-a134-5720-9176-8dc62204e358", 00:12:51.250 "is_configured": true, 00:12:51.250 "data_offset": 2048, 00:12:51.250 "data_size": 63488 00:12:51.250 }, 00:12:51.250 { 00:12:51.250 "name": null, 00:12:51.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.250 "is_configured": false, 00:12:51.250 "data_offset": 0, 00:12:51.250 "data_size": 63488 00:12:51.250 }, 00:12:51.250 { 00:12:51.250 "name": "BaseBdev3", 00:12:51.250 "uuid": 
"0da5fa91-6e9a-5f30-a863-43632c73e677", 00:12:51.250 "is_configured": true, 00:12:51.250 "data_offset": 2048, 00:12:51.250 "data_size": 63488 00:12:51.250 }, 00:12:51.250 { 00:12:51.250 "name": "BaseBdev4", 00:12:51.250 "uuid": "6e38a9ba-6897-5a74-a189-e364f0a4f434", 00:12:51.250 "is_configured": true, 00:12:51.250 "data_offset": 2048, 00:12:51.250 "data_size": 63488 00:12:51.250 } 00:12:51.250 ] 00:12:51.250 }' 00:12:51.250 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.250 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:51.250 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.250 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:51.250 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:51.250 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:51.250 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.250 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.250 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.250 01:32:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:51.250 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.250 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.250 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.250 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.250 01:32:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.250 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.250 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.250 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.250 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.250 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.250 "name": "raid_bdev1", 00:12:51.250 "uuid": "b986425b-9a02-4a48-8ac7-1cd65460968f", 00:12:51.250 "strip_size_kb": 0, 00:12:51.250 "state": "online", 00:12:51.250 "raid_level": "raid1", 00:12:51.250 "superblock": true, 00:12:51.250 "num_base_bdevs": 4, 00:12:51.250 "num_base_bdevs_discovered": 3, 00:12:51.250 "num_base_bdevs_operational": 3, 00:12:51.250 "base_bdevs_list": [ 00:12:51.250 { 00:12:51.250 "name": "spare", 00:12:51.250 "uuid": "9fc05918-a134-5720-9176-8dc62204e358", 00:12:51.250 "is_configured": true, 00:12:51.250 "data_offset": 2048, 00:12:51.250 "data_size": 63488 00:12:51.250 }, 00:12:51.250 { 00:12:51.250 "name": null, 00:12:51.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.250 "is_configured": false, 00:12:51.250 "data_offset": 0, 00:12:51.250 "data_size": 63488 00:12:51.250 }, 00:12:51.250 { 00:12:51.250 "name": "BaseBdev3", 00:12:51.250 "uuid": "0da5fa91-6e9a-5f30-a863-43632c73e677", 00:12:51.250 "is_configured": true, 00:12:51.251 "data_offset": 2048, 00:12:51.251 "data_size": 63488 00:12:51.251 }, 00:12:51.251 { 00:12:51.251 "name": "BaseBdev4", 00:12:51.251 "uuid": "6e38a9ba-6897-5a74-a189-e364f0a4f434", 00:12:51.251 "is_configured": true, 00:12:51.251 "data_offset": 2048, 00:12:51.251 "data_size": 63488 00:12:51.251 } 00:12:51.251 ] 00:12:51.251 }' 00:12:51.251 01:32:50 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.251 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.820 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:51.820 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.820 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.820 [2024-10-09 01:32:50.455906] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:51.820 [2024-10-09 01:32:50.456012] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:51.820 [2024-10-09 01:32:50.456151] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.820 [2024-10-09 01:32:50.456258] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:51.820 [2024-10-09 01:32:50.456333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:51.820 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.820 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:51.820 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.820 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.820 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.820 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.820 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:51.820 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:51.820 01:32:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:51.820 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:51.820 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:51.820 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:51.820 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:51.820 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:51.820 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:51.820 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:51.820 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:51.820 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:51.820 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:51.820 /dev/nbd0 00:12:52.080 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:52.080 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:52.080 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:52.080 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:52.080 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:52.080 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:52.080 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w 
nbd0 /proc/partitions 00:12:52.080 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:52.080 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:52.080 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:52.080 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.080 1+0 records in 00:12:52.080 1+0 records out 00:12:52.080 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301307 s, 13.6 MB/s 00:12:52.080 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.080 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:52.080 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.080 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:52.080 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:52.080 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:52.080 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:52.080 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:52.080 /dev/nbd1 00:12:52.338 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:52.338 01:32:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:52.338 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:52.338 01:32:50 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:52.338 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:52.338 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:52.338 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:52.338 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:52.338 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:52.338 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:52.338 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:52.338 1+0 records in 00:12:52.338 1+0 records out 00:12:52.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044214 s, 9.3 MB/s 00:12:52.338 01:32:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.338 01:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:52.338 01:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:52.338 01:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:52.338 01:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:52.338 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:52.338 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:52.338 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:52.338 01:32:51 bdev_raid.raid_rebuild_test_sb -- 
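The `waitfornbd` traces above (the `(( i <= 20 ))` loop with `grep -q -w nbdN /proc/partitions` and the `break`) poll until the nbd device appears before the `dd` read check runs. A minimal sketch of that polling pattern; the function name, device names, and 20-iteration bound come from the trace, while the sleep interval and the optional partitions-file parameter (added here so the sketch can be exercised without a real nbd device) are assumptions, not the actual SPDK helper:

```shell
# Sketch of the waitfornbd poll loop seen in the trace: retry up to
# 20 times until the device name appears in the partitions listing.
# The second argument defaults to /proc/partitions and exists only
# so this sketch is testable; the real helper hardcodes the path.
waitfornbd() {
    local nbd_name=$1
    local partitions=${2:-/proc/partitions}
    local i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" "$partitions"; then
            return 0          # device is visible, safe to dd from it
        fi
        sleep 0.1             # interval is an assumption
    done
    return 1                  # gave up after 20 attempts
}
```

Once the device is visible, the trace follows up with a 1-block direct-I/O `dd` read and a `stat -c %s` on the result to confirm the nbd mapping actually serves data.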
bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:52.338 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:52.338 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:52.338 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:52.339 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:52.339 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.339 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:52.597 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:52.597 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:52.597 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:52.597 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.597 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.597 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:52.597 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:52.597 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.597 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.597 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:52.857 01:32:51 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.857 [2024-10-09 01:32:51.546997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:52.857 [2024-10-09 01:32:51.547113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.857 [2024-10-09 01:32:51.547174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:52.857 [2024-10-09 01:32:51.547205] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.857 [2024-10-09 01:32:51.549804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.857 [2024-10-09 01:32:51.549876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:52.857 [2024-10-09 01:32:51.549999] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:52.857 [2024-10-09 01:32:51.550078] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:52.857 [2024-10-09 01:32:51.550247] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:52.857 [2024-10-09 01:32:51.550389] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:52.857 spare 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.857 [2024-10-09 01:32:51.650501] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:52.857 [2024-10-09 01:32:51.650598] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:52.857 [2024-10-09 01:32:51.650995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:12:52.857 [2024-10-09 01:32:51.651208] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:52.857 [2024-10-09 01:32:51.651250] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:52.857 [2024-10-09 01:32:51.651463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.857 "name": "raid_bdev1", 00:12:52.857 "uuid": "b986425b-9a02-4a48-8ac7-1cd65460968f", 00:12:52.857 
"strip_size_kb": 0, 00:12:52.857 "state": "online", 00:12:52.857 "raid_level": "raid1", 00:12:52.857 "superblock": true, 00:12:52.857 "num_base_bdevs": 4, 00:12:52.857 "num_base_bdevs_discovered": 3, 00:12:52.857 "num_base_bdevs_operational": 3, 00:12:52.857 "base_bdevs_list": [ 00:12:52.857 { 00:12:52.857 "name": "spare", 00:12:52.857 "uuid": "9fc05918-a134-5720-9176-8dc62204e358", 00:12:52.857 "is_configured": true, 00:12:52.857 "data_offset": 2048, 00:12:52.857 "data_size": 63488 00:12:52.857 }, 00:12:52.857 { 00:12:52.857 "name": null, 00:12:52.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.857 "is_configured": false, 00:12:52.857 "data_offset": 2048, 00:12:52.857 "data_size": 63488 00:12:52.857 }, 00:12:52.857 { 00:12:52.857 "name": "BaseBdev3", 00:12:52.857 "uuid": "0da5fa91-6e9a-5f30-a863-43632c73e677", 00:12:52.857 "is_configured": true, 00:12:52.857 "data_offset": 2048, 00:12:52.857 "data_size": 63488 00:12:52.857 }, 00:12:52.857 { 00:12:52.857 "name": "BaseBdev4", 00:12:52.857 "uuid": "6e38a9ba-6897-5a74-a189-e364f0a4f434", 00:12:52.857 "is_configured": true, 00:12:52.857 "data_offset": 2048, 00:12:52.857 "data_size": 63488 00:12:52.857 } 00:12:52.857 ] 00:12:52.857 }' 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.857 01:32:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.426 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:53.426 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.426 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:53.426 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:53.426 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.426 01:32:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.426 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.426 01:32:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.426 01:32:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.426 01:32:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.426 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.426 "name": "raid_bdev1", 00:12:53.426 "uuid": "b986425b-9a02-4a48-8ac7-1cd65460968f", 00:12:53.426 "strip_size_kb": 0, 00:12:53.426 "state": "online", 00:12:53.426 "raid_level": "raid1", 00:12:53.426 "superblock": true, 00:12:53.426 "num_base_bdevs": 4, 00:12:53.426 "num_base_bdevs_discovered": 3, 00:12:53.426 "num_base_bdevs_operational": 3, 00:12:53.426 "base_bdevs_list": [ 00:12:53.426 { 00:12:53.426 "name": "spare", 00:12:53.426 "uuid": "9fc05918-a134-5720-9176-8dc62204e358", 00:12:53.426 "is_configured": true, 00:12:53.426 "data_offset": 2048, 00:12:53.426 "data_size": 63488 00:12:53.426 }, 00:12:53.426 { 00:12:53.426 "name": null, 00:12:53.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.426 "is_configured": false, 00:12:53.426 "data_offset": 2048, 00:12:53.426 "data_size": 63488 00:12:53.426 }, 00:12:53.426 { 00:12:53.426 "name": "BaseBdev3", 00:12:53.426 "uuid": "0da5fa91-6e9a-5f30-a863-43632c73e677", 00:12:53.426 "is_configured": true, 00:12:53.426 "data_offset": 2048, 00:12:53.426 "data_size": 63488 00:12:53.426 }, 00:12:53.426 { 00:12:53.426 "name": "BaseBdev4", 00:12:53.426 "uuid": "6e38a9ba-6897-5a74-a189-e364f0a4f434", 00:12:53.426 "is_configured": true, 00:12:53.426 "data_offset": 2048, 00:12:53.426 "data_size": 63488 00:12:53.426 } 00:12:53.426 ] 00:12:53.426 }' 00:12:53.426 01:32:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.426 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:53.426 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.426 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:53.426 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.426 01:32:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.426 01:32:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.426 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:53.426 01:32:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.426 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:53.426 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:53.426 01:32:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.427 01:32:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.427 [2024-10-09 01:32:52.299638] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:53.427 01:32:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.427 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:53.427 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.427 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.427 01:32:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.427 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.427 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:53.427 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.427 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.427 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.427 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.427 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.427 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.427 01:32:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.427 01:32:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.686 01:32:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.686 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.686 "name": "raid_bdev1", 00:12:53.686 "uuid": "b986425b-9a02-4a48-8ac7-1cd65460968f", 00:12:53.686 "strip_size_kb": 0, 00:12:53.686 "state": "online", 00:12:53.686 "raid_level": "raid1", 00:12:53.686 "superblock": true, 00:12:53.686 "num_base_bdevs": 4, 00:12:53.686 "num_base_bdevs_discovered": 2, 00:12:53.686 "num_base_bdevs_operational": 2, 00:12:53.686 "base_bdevs_list": [ 00:12:53.686 { 00:12:53.686 "name": null, 00:12:53.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.686 "is_configured": false, 00:12:53.686 "data_offset": 0, 00:12:53.686 "data_size": 63488 00:12:53.686 }, 00:12:53.686 { 
00:12:53.686 "name": null, 00:12:53.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.686 "is_configured": false, 00:12:53.686 "data_offset": 2048, 00:12:53.686 "data_size": 63488 00:12:53.686 }, 00:12:53.686 { 00:12:53.686 "name": "BaseBdev3", 00:12:53.686 "uuid": "0da5fa91-6e9a-5f30-a863-43632c73e677", 00:12:53.686 "is_configured": true, 00:12:53.686 "data_offset": 2048, 00:12:53.686 "data_size": 63488 00:12:53.686 }, 00:12:53.686 { 00:12:53.686 "name": "BaseBdev4", 00:12:53.686 "uuid": "6e38a9ba-6897-5a74-a189-e364f0a4f434", 00:12:53.686 "is_configured": true, 00:12:53.686 "data_offset": 2048, 00:12:53.686 "data_size": 63488 00:12:53.686 } 00:12:53.686 ] 00:12:53.686 }' 00:12:53.686 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.686 01:32:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.945 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:53.945 01:32:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.945 01:32:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.945 [2024-10-09 01:32:52.755812] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:53.945 [2024-10-09 01:32:52.756125] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:53.945 [2024-10-09 01:32:52.756199] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:53.945 [2024-10-09 01:32:52.756288] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:53.945 [2024-10-09 01:32:52.762037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:12:53.945 01:32:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.945 01:32:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:53.945 [2024-10-09 01:32:52.764125] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:54.885 01:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.885 01:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.885 01:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.885 01:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.885 01:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.885 01:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.885 01:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.885 01:32:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.885 01:32:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.145 01:32:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.145 01:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.145 "name": "raid_bdev1", 00:12:55.145 "uuid": "b986425b-9a02-4a48-8ac7-1cd65460968f", 00:12:55.145 "strip_size_kb": 0, 00:12:55.145 "state": "online", 00:12:55.145 "raid_level": "raid1", 
00:12:55.145 "superblock": true, 00:12:55.145 "num_base_bdevs": 4, 00:12:55.145 "num_base_bdevs_discovered": 3, 00:12:55.145 "num_base_bdevs_operational": 3, 00:12:55.145 "process": { 00:12:55.145 "type": "rebuild", 00:12:55.145 "target": "spare", 00:12:55.145 "progress": { 00:12:55.145 "blocks": 20480, 00:12:55.145 "percent": 32 00:12:55.145 } 00:12:55.145 }, 00:12:55.145 "base_bdevs_list": [ 00:12:55.145 { 00:12:55.145 "name": "spare", 00:12:55.145 "uuid": "9fc05918-a134-5720-9176-8dc62204e358", 00:12:55.145 "is_configured": true, 00:12:55.145 "data_offset": 2048, 00:12:55.145 "data_size": 63488 00:12:55.145 }, 00:12:55.145 { 00:12:55.145 "name": null, 00:12:55.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.145 "is_configured": false, 00:12:55.145 "data_offset": 2048, 00:12:55.145 "data_size": 63488 00:12:55.145 }, 00:12:55.145 { 00:12:55.145 "name": "BaseBdev3", 00:12:55.145 "uuid": "0da5fa91-6e9a-5f30-a863-43632c73e677", 00:12:55.145 "is_configured": true, 00:12:55.145 "data_offset": 2048, 00:12:55.145 "data_size": 63488 00:12:55.145 }, 00:12:55.145 { 00:12:55.145 "name": "BaseBdev4", 00:12:55.145 "uuid": "6e38a9ba-6897-5a74-a189-e364f0a4f434", 00:12:55.145 "is_configured": true, 00:12:55.145 "data_offset": 2048, 00:12:55.145 "data_size": 63488 00:12:55.145 } 00:12:55.145 ] 00:12:55.145 }' 00:12:55.145 01:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.145 01:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:55.145 01:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.145 01:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:55.145 01:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:55.145 01:32:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:55.145 01:32:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.145 [2024-10-09 01:32:53.911899] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:55.145 [2024-10-09 01:32:53.974188] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:55.145 [2024-10-09 01:32:53.974293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.145 [2024-10-09 01:32:53.974329] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:55.145 [2024-10-09 01:32:53.974349] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:55.145 01:32:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.145 01:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:55.145 01:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.145 01:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.145 01:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:55.145 01:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.145 01:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:55.145 01:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.145 01:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.145 01:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.145 01:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.145 01:32:53 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.145 01:32:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.145 01:32:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.145 01:32:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.145 01:32:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.405 01:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.405 "name": "raid_bdev1", 00:12:55.405 "uuid": "b986425b-9a02-4a48-8ac7-1cd65460968f", 00:12:55.405 "strip_size_kb": 0, 00:12:55.405 "state": "online", 00:12:55.405 "raid_level": "raid1", 00:12:55.405 "superblock": true, 00:12:55.405 "num_base_bdevs": 4, 00:12:55.405 "num_base_bdevs_discovered": 2, 00:12:55.405 "num_base_bdevs_operational": 2, 00:12:55.405 "base_bdevs_list": [ 00:12:55.405 { 00:12:55.405 "name": null, 00:12:55.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.405 "is_configured": false, 00:12:55.405 "data_offset": 0, 00:12:55.405 "data_size": 63488 00:12:55.405 }, 00:12:55.405 { 00:12:55.405 "name": null, 00:12:55.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.405 "is_configured": false, 00:12:55.405 "data_offset": 2048, 00:12:55.405 "data_size": 63488 00:12:55.405 }, 00:12:55.405 { 00:12:55.405 "name": "BaseBdev3", 00:12:55.405 "uuid": "0da5fa91-6e9a-5f30-a863-43632c73e677", 00:12:55.405 "is_configured": true, 00:12:55.405 "data_offset": 2048, 00:12:55.405 "data_size": 63488 00:12:55.405 }, 00:12:55.405 { 00:12:55.405 "name": "BaseBdev4", 00:12:55.405 "uuid": "6e38a9ba-6897-5a74-a189-e364f0a4f434", 00:12:55.405 "is_configured": true, 00:12:55.405 "data_offset": 2048, 00:12:55.405 "data_size": 63488 00:12:55.405 } 00:12:55.405 ] 00:12:55.405 }' 00:12:55.405 01:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:55.405 01:32:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.665 01:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:55.665 01:32:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.665 01:32:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.665 [2024-10-09 01:32:54.381381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:55.665 [2024-10-09 01:32:54.381456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.665 [2024-10-09 01:32:54.381487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:12:55.665 [2024-10-09 01:32:54.381497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.665 [2024-10-09 01:32:54.382077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.665 [2024-10-09 01:32:54.382103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:55.665 [2024-10-09 01:32:54.382224] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:55.665 [2024-10-09 01:32:54.382238] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:55.665 [2024-10-09 01:32:54.382262] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:55.665 [2024-10-09 01:32:54.382296] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:55.665 [2024-10-09 01:32:54.387956] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ef0 00:12:55.665 spare 00:12:55.665 01:32:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.665 01:32:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:55.665 [2024-10-09 01:32:54.390239] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:56.604 01:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:56.604 01:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.604 01:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:56.604 01:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:56.604 01:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.604 01:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.604 01:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.604 01:32:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.604 01:32:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.604 01:32:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.604 01:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.604 "name": "raid_bdev1", 00:12:56.604 "uuid": "b986425b-9a02-4a48-8ac7-1cd65460968f", 00:12:56.604 "strip_size_kb": 0, 00:12:56.604 "state": "online", 00:12:56.604 
"raid_level": "raid1", 00:12:56.604 "superblock": true, 00:12:56.604 "num_base_bdevs": 4, 00:12:56.604 "num_base_bdevs_discovered": 3, 00:12:56.604 "num_base_bdevs_operational": 3, 00:12:56.604 "process": { 00:12:56.604 "type": "rebuild", 00:12:56.604 "target": "spare", 00:12:56.604 "progress": { 00:12:56.604 "blocks": 20480, 00:12:56.604 "percent": 32 00:12:56.604 } 00:12:56.604 }, 00:12:56.604 "base_bdevs_list": [ 00:12:56.604 { 00:12:56.604 "name": "spare", 00:12:56.604 "uuid": "9fc05918-a134-5720-9176-8dc62204e358", 00:12:56.604 "is_configured": true, 00:12:56.604 "data_offset": 2048, 00:12:56.604 "data_size": 63488 00:12:56.604 }, 00:12:56.604 { 00:12:56.604 "name": null, 00:12:56.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.604 "is_configured": false, 00:12:56.604 "data_offset": 2048, 00:12:56.604 "data_size": 63488 00:12:56.604 }, 00:12:56.604 { 00:12:56.604 "name": "BaseBdev3", 00:12:56.604 "uuid": "0da5fa91-6e9a-5f30-a863-43632c73e677", 00:12:56.604 "is_configured": true, 00:12:56.604 "data_offset": 2048, 00:12:56.604 "data_size": 63488 00:12:56.604 }, 00:12:56.604 { 00:12:56.604 "name": "BaseBdev4", 00:12:56.604 "uuid": "6e38a9ba-6897-5a74-a189-e364f0a4f434", 00:12:56.604 "is_configured": true, 00:12:56.604 "data_offset": 2048, 00:12:56.604 "data_size": 63488 00:12:56.604 } 00:12:56.604 ] 00:12:56.604 }' 00:12:56.604 01:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.864 01:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:56.864 01:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.864 01:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:56.864 01:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:56.864 01:32:55 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.864 01:32:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.864 [2024-10-09 01:32:55.524682] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:56.864 [2024-10-09 01:32:55.600577] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:56.864 [2024-10-09 01:32:55.600710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.864 [2024-10-09 01:32:55.600749] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:56.864 [2024-10-09 01:32:55.600773] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:56.864 01:32:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.864 01:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:56.864 01:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.864 01:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.864 01:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.864 01:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.864 01:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:56.864 01:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.864 01:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.864 01:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.864 01:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.864 
01:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.864 01:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.864 01:32:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.864 01:32:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.864 01:32:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.864 01:32:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.864 "name": "raid_bdev1", 00:12:56.864 "uuid": "b986425b-9a02-4a48-8ac7-1cd65460968f", 00:12:56.864 "strip_size_kb": 0, 00:12:56.864 "state": "online", 00:12:56.864 "raid_level": "raid1", 00:12:56.864 "superblock": true, 00:12:56.864 "num_base_bdevs": 4, 00:12:56.864 "num_base_bdevs_discovered": 2, 00:12:56.864 "num_base_bdevs_operational": 2, 00:12:56.864 "base_bdevs_list": [ 00:12:56.864 { 00:12:56.864 "name": null, 00:12:56.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.864 "is_configured": false, 00:12:56.864 "data_offset": 0, 00:12:56.864 "data_size": 63488 00:12:56.864 }, 00:12:56.864 { 00:12:56.864 "name": null, 00:12:56.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.864 "is_configured": false, 00:12:56.864 "data_offset": 2048, 00:12:56.864 "data_size": 63488 00:12:56.864 }, 00:12:56.864 { 00:12:56.864 "name": "BaseBdev3", 00:12:56.864 "uuid": "0da5fa91-6e9a-5f30-a863-43632c73e677", 00:12:56.864 "is_configured": true, 00:12:56.864 "data_offset": 2048, 00:12:56.864 "data_size": 63488 00:12:56.864 }, 00:12:56.864 { 00:12:56.864 "name": "BaseBdev4", 00:12:56.864 "uuid": "6e38a9ba-6897-5a74-a189-e364f0a4f434", 00:12:56.864 "is_configured": true, 00:12:56.864 "data_offset": 2048, 00:12:56.864 "data_size": 63488 00:12:56.864 } 00:12:56.864 ] 00:12:56.864 }' 00:12:56.864 01:32:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.864 01:32:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.434 01:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:57.434 01:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.434 01:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:57.434 01:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:57.434 01:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.434 01:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.434 01:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.434 01:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.434 01:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.434 01:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.434 01:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.434 "name": "raid_bdev1", 00:12:57.434 "uuid": "b986425b-9a02-4a48-8ac7-1cd65460968f", 00:12:57.434 "strip_size_kb": 0, 00:12:57.434 "state": "online", 00:12:57.434 "raid_level": "raid1", 00:12:57.434 "superblock": true, 00:12:57.434 "num_base_bdevs": 4, 00:12:57.434 "num_base_bdevs_discovered": 2, 00:12:57.434 "num_base_bdevs_operational": 2, 00:12:57.434 "base_bdevs_list": [ 00:12:57.434 { 00:12:57.434 "name": null, 00:12:57.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.434 "is_configured": false, 00:12:57.434 "data_offset": 0, 00:12:57.434 "data_size": 63488 00:12:57.434 }, 00:12:57.434 
{ 00:12:57.434 "name": null, 00:12:57.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.434 "is_configured": false, 00:12:57.434 "data_offset": 2048, 00:12:57.434 "data_size": 63488 00:12:57.434 }, 00:12:57.434 { 00:12:57.434 "name": "BaseBdev3", 00:12:57.434 "uuid": "0da5fa91-6e9a-5f30-a863-43632c73e677", 00:12:57.434 "is_configured": true, 00:12:57.434 "data_offset": 2048, 00:12:57.434 "data_size": 63488 00:12:57.434 }, 00:12:57.434 { 00:12:57.434 "name": "BaseBdev4", 00:12:57.434 "uuid": "6e38a9ba-6897-5a74-a189-e364f0a4f434", 00:12:57.434 "is_configured": true, 00:12:57.434 "data_offset": 2048, 00:12:57.435 "data_size": 63488 00:12:57.435 } 00:12:57.435 ] 00:12:57.435 }' 00:12:57.435 01:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.435 01:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:57.435 01:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.435 01:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:57.435 01:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:57.435 01:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.435 01:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.435 01:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.435 01:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:57.435 01:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.435 01:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.435 [2024-10-09 01:32:56.195866] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:57.435 [2024-10-09 01:32:56.195971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.435 [2024-10-09 01:32:56.196008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:12:57.435 [2024-10-09 01:32:56.196038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.435 [2024-10-09 01:32:56.196553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.435 [2024-10-09 01:32:56.196647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:57.435 [2024-10-09 01:32:56.196761] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:57.435 [2024-10-09 01:32:56.196807] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:57.435 [2024-10-09 01:32:56.196850] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:57.435 [2024-10-09 01:32:56.196888] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:57.435 BaseBdev1 00:12:57.435 01:32:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.435 01:32:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:58.374 01:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:58.374 01:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.374 01:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.374 01:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.374 01:32:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.374 01:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:58.374 01:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.374 01:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.374 01:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.374 01:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.374 01:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.374 01:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.374 01:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.374 01:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.374 01:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.374 01:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.374 "name": "raid_bdev1", 00:12:58.374 "uuid": "b986425b-9a02-4a48-8ac7-1cd65460968f", 00:12:58.374 "strip_size_kb": 0, 00:12:58.374 "state": "online", 00:12:58.374 "raid_level": "raid1", 00:12:58.374 "superblock": true, 00:12:58.374 "num_base_bdevs": 4, 00:12:58.374 "num_base_bdevs_discovered": 2, 00:12:58.374 "num_base_bdevs_operational": 2, 00:12:58.374 "base_bdevs_list": [ 00:12:58.374 { 00:12:58.374 "name": null, 00:12:58.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.374 "is_configured": false, 00:12:58.374 "data_offset": 0, 00:12:58.374 "data_size": 63488 00:12:58.374 }, 00:12:58.374 { 00:12:58.374 "name": null, 00:12:58.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.374 
"is_configured": false, 00:12:58.374 "data_offset": 2048, 00:12:58.374 "data_size": 63488 00:12:58.374 }, 00:12:58.374 { 00:12:58.374 "name": "BaseBdev3", 00:12:58.374 "uuid": "0da5fa91-6e9a-5f30-a863-43632c73e677", 00:12:58.374 "is_configured": true, 00:12:58.374 "data_offset": 2048, 00:12:58.374 "data_size": 63488 00:12:58.374 }, 00:12:58.374 { 00:12:58.374 "name": "BaseBdev4", 00:12:58.374 "uuid": "6e38a9ba-6897-5a74-a189-e364f0a4f434", 00:12:58.374 "is_configured": true, 00:12:58.374 "data_offset": 2048, 00:12:58.374 "data_size": 63488 00:12:58.374 } 00:12:58.374 ] 00:12:58.374 }' 00:12:58.374 01:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.374 01:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:12:58.944 "name": "raid_bdev1", 00:12:58.944 "uuid": "b986425b-9a02-4a48-8ac7-1cd65460968f", 00:12:58.944 "strip_size_kb": 0, 00:12:58.944 "state": "online", 00:12:58.944 "raid_level": "raid1", 00:12:58.944 "superblock": true, 00:12:58.944 "num_base_bdevs": 4, 00:12:58.944 "num_base_bdevs_discovered": 2, 00:12:58.944 "num_base_bdevs_operational": 2, 00:12:58.944 "base_bdevs_list": [ 00:12:58.944 { 00:12:58.944 "name": null, 00:12:58.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.944 "is_configured": false, 00:12:58.944 "data_offset": 0, 00:12:58.944 "data_size": 63488 00:12:58.944 }, 00:12:58.944 { 00:12:58.944 "name": null, 00:12:58.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.944 "is_configured": false, 00:12:58.944 "data_offset": 2048, 00:12:58.944 "data_size": 63488 00:12:58.944 }, 00:12:58.944 { 00:12:58.944 "name": "BaseBdev3", 00:12:58.944 "uuid": "0da5fa91-6e9a-5f30-a863-43632c73e677", 00:12:58.944 "is_configured": true, 00:12:58.944 "data_offset": 2048, 00:12:58.944 "data_size": 63488 00:12:58.944 }, 00:12:58.944 { 00:12:58.944 "name": "BaseBdev4", 00:12:58.944 "uuid": "6e38a9ba-6897-5a74-a189-e364f0a4f434", 00:12:58.944 "is_configured": true, 00:12:58.944 "data_offset": 2048, 00:12:58.944 "data_size": 63488 00:12:58.944 } 00:12:58.944 ] 00:12:58.944 }' 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.944 [2024-10-09 01:32:57.780316] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:58.944 [2024-10-09 01:32:57.780611] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:58.944 [2024-10-09 01:32:57.780714] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:58.944 request: 00:12:58.944 { 00:12:58.944 "base_bdev": "BaseBdev1", 00:12:58.944 "raid_bdev": "raid_bdev1", 00:12:58.944 "method": "bdev_raid_add_base_bdev", 00:12:58.944 "req_id": 1 00:12:58.944 } 00:12:58.944 Got JSON-RPC error response 00:12:58.944 response: 00:12:58.944 { 00:12:58.944 "code": -22, 00:12:58.944 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:58.944 } 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:58.944 01:32:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:00.338 01:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:00.338 01:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.338 01:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.338 01:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.338 01:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.338 01:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:00.338 01:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.338 01:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.338 01:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.338 01:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.338 01:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.338 01:32:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.338 01:32:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.338 01:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:00.338 01:32:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.338 01:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.338 "name": "raid_bdev1", 00:13:00.338 "uuid": "b986425b-9a02-4a48-8ac7-1cd65460968f", 00:13:00.338 "strip_size_kb": 0, 00:13:00.338 "state": "online", 00:13:00.338 "raid_level": "raid1", 00:13:00.338 "superblock": true, 00:13:00.338 "num_base_bdevs": 4, 00:13:00.338 "num_base_bdevs_discovered": 2, 00:13:00.338 "num_base_bdevs_operational": 2, 00:13:00.338 "base_bdevs_list": [ 00:13:00.338 { 00:13:00.338 "name": null, 00:13:00.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.338 "is_configured": false, 00:13:00.338 "data_offset": 0, 00:13:00.338 "data_size": 63488 00:13:00.338 }, 00:13:00.338 { 00:13:00.338 "name": null, 00:13:00.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.338 "is_configured": false, 00:13:00.338 "data_offset": 2048, 00:13:00.338 "data_size": 63488 00:13:00.338 }, 00:13:00.338 { 00:13:00.338 "name": "BaseBdev3", 00:13:00.338 "uuid": "0da5fa91-6e9a-5f30-a863-43632c73e677", 00:13:00.338 "is_configured": true, 00:13:00.338 "data_offset": 2048, 00:13:00.338 "data_size": 63488 00:13:00.338 }, 00:13:00.338 { 00:13:00.338 "name": "BaseBdev4", 00:13:00.338 "uuid": "6e38a9ba-6897-5a74-a189-e364f0a4f434", 00:13:00.338 "is_configured": true, 00:13:00.338 "data_offset": 2048, 00:13:00.338 "data_size": 63488 00:13:00.338 } 00:13:00.338 ] 00:13:00.338 }' 00:13:00.338 01:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.338 01:32:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.338 01:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:00.338 01:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.338 01:32:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:00.338 01:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:00.338 01:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.338 01:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.338 01:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.338 01:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.339 01:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.598 01:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.598 01:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.598 "name": "raid_bdev1", 00:13:00.598 "uuid": "b986425b-9a02-4a48-8ac7-1cd65460968f", 00:13:00.598 "strip_size_kb": 0, 00:13:00.598 "state": "online", 00:13:00.598 "raid_level": "raid1", 00:13:00.598 "superblock": true, 00:13:00.598 "num_base_bdevs": 4, 00:13:00.598 "num_base_bdevs_discovered": 2, 00:13:00.598 "num_base_bdevs_operational": 2, 00:13:00.598 "base_bdevs_list": [ 00:13:00.598 { 00:13:00.598 "name": null, 00:13:00.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.598 "is_configured": false, 00:13:00.598 "data_offset": 0, 00:13:00.598 "data_size": 63488 00:13:00.598 }, 00:13:00.598 { 00:13:00.598 "name": null, 00:13:00.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.598 "is_configured": false, 00:13:00.598 "data_offset": 2048, 00:13:00.598 "data_size": 63488 00:13:00.598 }, 00:13:00.598 { 00:13:00.598 "name": "BaseBdev3", 00:13:00.598 "uuid": "0da5fa91-6e9a-5f30-a863-43632c73e677", 00:13:00.598 "is_configured": true, 00:13:00.598 "data_offset": 2048, 00:13:00.598 "data_size": 63488 00:13:00.598 }, 
00:13:00.598 { 00:13:00.598 "name": "BaseBdev4", 00:13:00.598 "uuid": "6e38a9ba-6897-5a74-a189-e364f0a4f434", 00:13:00.598 "is_configured": true, 00:13:00.598 "data_offset": 2048, 00:13:00.598 "data_size": 63488 00:13:00.598 } 00:13:00.598 ] 00:13:00.598 }' 00:13:00.598 01:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.599 01:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:00.599 01:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.599 01:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:00.599 01:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 89761 00:13:00.599 01:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 89761 ']' 00:13:00.599 01:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 89761 00:13:00.599 01:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:00.599 01:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:00.599 01:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89761 00:13:00.599 killing process with pid 89761 00:13:00.599 Received shutdown signal, test time was about 60.000000 seconds 00:13:00.599 00:13:00.599 Latency(us) 00:13:00.599 [2024-10-09T01:32:59.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:00.599 [2024-10-09T01:32:59.492Z] =================================================================================================================== 00:13:00.599 [2024-10-09T01:32:59.492Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:00.599 01:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:13:00.599 01:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:00.599 01:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89761' 00:13:00.599 01:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 89761 00:13:00.599 [2024-10-09 01:32:59.387247] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:00.599 [2024-10-09 01:32:59.387376] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:00.599 01:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 89761 00:13:00.599 [2024-10-09 01:32:59.387454] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:00.599 [2024-10-09 01:32:59.387465] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:00.599 [2024-10-09 01:32:59.438401] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:00.859 01:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:00.859 ************************************ 00:13:00.859 END TEST raid_rebuild_test_sb 00:13:00.859 ************************************ 00:13:00.859 00:13:00.859 real 0m22.835s 00:13:00.859 user 0m27.894s 00:13:00.859 sys 0m3.873s 00:13:00.859 01:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:00.859 01:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.859 01:32:59 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:13:00.859 01:32:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:00.859 01:32:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:00.859 01:32:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:13:01.118 ************************************ 00:13:01.118 START TEST raid_rebuild_test_io 00:13:01.118 ************************************ 00:13:01.118 01:32:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:13:01.118 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:01.118 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:01.118 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:01.118 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:01.118 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:01.118 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:01.118 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=90491 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 90491 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 90491 ']' 00:13:01.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:01.119 01:32:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.119 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:01.119 Zero copy mechanism will not be used. 00:13:01.119 [2024-10-09 01:32:59.877891] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:13:01.119 [2024-10-09 01:32:59.878032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90491 ] 00:13:01.379 [2024-10-09 01:33:00.012250] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:01.379 [2024-10-09 01:33:00.040647] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.379 [2024-10-09 01:33:00.088507] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.379 [2024-10-09 01:33:00.132570] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:01.379 [2024-10-09 01:33:00.132628] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:01.948 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:01.948 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:13:01.948 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:01.948 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:01.948 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.948 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.948 BaseBdev1_malloc 00:13:01.948 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.948 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:01.948 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.948 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.948 [2024-10-09 01:33:00.700812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:01.948 [2024-10-09 01:33:00.700967] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.948 [2024-10-09 01:33:00.701002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:01.948 [2024-10-09 
01:33:00.701022] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.948 [2024-10-09 01:33:00.703161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.948 [2024-10-09 01:33:00.703207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:01.948 BaseBdev1 00:13:01.948 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.948 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:01.948 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.949 BaseBdev2_malloc 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.949 [2024-10-09 01:33:00.739540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:01.949 [2024-10-09 01:33:00.739615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.949 [2024-10-09 01:33:00.739638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:01.949 [2024-10-09 01:33:00.739651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.949 [2024-10-09 01:33:00.741725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:13:01.949 [2024-10-09 01:33:00.741768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:01.949 BaseBdev2 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.949 BaseBdev3_malloc 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.949 [2024-10-09 01:33:00.768462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:01.949 [2024-10-09 01:33:00.768552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.949 [2024-10-09 01:33:00.768584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:01.949 [2024-10-09 01:33:00.768598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.949 [2024-10-09 01:33:00.770652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.949 [2024-10-09 01:33:00.770764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:01.949 BaseBdev3 00:13:01.949 01:33:00 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.949 BaseBdev4_malloc 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.949 [2024-10-09 01:33:00.797433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:01.949 [2024-10-09 01:33:00.797503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.949 [2024-10-09 01:33:00.797541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:01.949 [2024-10-09 01:33:00.797555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.949 [2024-10-09 01:33:00.799641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.949 [2024-10-09 01:33:00.799684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:01.949 BaseBdev4 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.949 spare_malloc 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.949 spare_delay 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.949 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.949 [2024-10-09 01:33:00.838368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:01.949 [2024-10-09 01:33:00.838448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.949 [2024-10-09 01:33:00.838471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:01.949 [2024-10-09 01:33:00.838483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.209 [2024-10-09 01:33:00.840617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.209 [2024-10-09 01:33:00.840660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:02.209 spare 00:13:02.209 01:33:00 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.209 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:02.209 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.209 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.209 [2024-10-09 01:33:00.850445] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:02.209 [2024-10-09 01:33:00.852384] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:02.209 [2024-10-09 01:33:00.852451] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:02.209 [2024-10-09 01:33:00.852499] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:02.209 [2024-10-09 01:33:00.852606] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:02.209 [2024-10-09 01:33:00.852631] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:02.209 [2024-10-09 01:33:00.852893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:02.209 [2024-10-09 01:33:00.853035] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:02.209 [2024-10-09 01:33:00.853046] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:02.209 [2024-10-09 01:33:00.853158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.209 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.209 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:02.209 01:33:00 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.209 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.209 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.209 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.209 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:02.209 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.209 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.209 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.209 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.209 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.209 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.209 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.209 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.209 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.209 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.209 "name": "raid_bdev1", 00:13:02.209 "uuid": "929394b5-11a2-4512-aec3-4b2d1249f716", 00:13:02.209 "strip_size_kb": 0, 00:13:02.210 "state": "online", 00:13:02.210 "raid_level": "raid1", 00:13:02.210 "superblock": false, 00:13:02.210 "num_base_bdevs": 4, 00:13:02.210 "num_base_bdevs_discovered": 4, 00:13:02.210 "num_base_bdevs_operational": 4, 00:13:02.210 "base_bdevs_list": [ 00:13:02.210 
{ 00:13:02.210 "name": "BaseBdev1", 00:13:02.210 "uuid": "54b92499-1a4c-5a68-8c3a-944ebbf6d18e", 00:13:02.210 "is_configured": true, 00:13:02.210 "data_offset": 0, 00:13:02.210 "data_size": 65536 00:13:02.210 }, 00:13:02.210 { 00:13:02.210 "name": "BaseBdev2", 00:13:02.210 "uuid": "a47862a5-c712-567a-85a2-160e130bda91", 00:13:02.210 "is_configured": true, 00:13:02.210 "data_offset": 0, 00:13:02.210 "data_size": 65536 00:13:02.210 }, 00:13:02.210 { 00:13:02.210 "name": "BaseBdev3", 00:13:02.210 "uuid": "57515727-3e7f-56a9-b171-e076ab70aca0", 00:13:02.210 "is_configured": true, 00:13:02.210 "data_offset": 0, 00:13:02.210 "data_size": 65536 00:13:02.210 }, 00:13:02.210 { 00:13:02.210 "name": "BaseBdev4", 00:13:02.210 "uuid": "3e40eb99-b472-5084-890c-fe9721ecb8cb", 00:13:02.210 "is_configured": true, 00:13:02.210 "data_offset": 0, 00:13:02.210 "data_size": 65536 00:13:02.210 } 00:13:02.210 ] 00:13:02.210 }' 00:13:02.210 01:33:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.210 01:33:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.470 01:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:02.470 01:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:02.470 01:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.470 01:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.470 [2024-10-09 01:33:01.302830] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:02.470 01:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.470 01:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:02.470 01:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:13:02.470 01:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.470 01:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.470 01:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.730 01:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.730 01:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:02.730 01:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:02.730 01:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:02.730 01:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:02.730 01:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.730 01:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.730 [2024-10-09 01:33:01.402536] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:02.730 01:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.730 01:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:02.730 01:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.730 01:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.730 01:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.730 01:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.730 01:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:13:02.730 01:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.730 01:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.730 01:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.730 01:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.730 01:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.730 01:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.730 01:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.730 01:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.730 01:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.730 01:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.730 "name": "raid_bdev1", 00:13:02.730 "uuid": "929394b5-11a2-4512-aec3-4b2d1249f716", 00:13:02.730 "strip_size_kb": 0, 00:13:02.730 "state": "online", 00:13:02.730 "raid_level": "raid1", 00:13:02.730 "superblock": false, 00:13:02.730 "num_base_bdevs": 4, 00:13:02.730 "num_base_bdevs_discovered": 3, 00:13:02.730 "num_base_bdevs_operational": 3, 00:13:02.730 "base_bdevs_list": [ 00:13:02.730 { 00:13:02.730 "name": null, 00:13:02.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.730 "is_configured": false, 00:13:02.730 "data_offset": 0, 00:13:02.730 "data_size": 65536 00:13:02.730 }, 00:13:02.730 { 00:13:02.730 "name": "BaseBdev2", 00:13:02.730 "uuid": "a47862a5-c712-567a-85a2-160e130bda91", 00:13:02.730 "is_configured": true, 00:13:02.730 "data_offset": 0, 00:13:02.730 "data_size": 65536 00:13:02.730 }, 00:13:02.730 { 00:13:02.730 "name": "BaseBdev3", 00:13:02.730 "uuid": 
"57515727-3e7f-56a9-b171-e076ab70aca0", 00:13:02.730 "is_configured": true, 00:13:02.730 "data_offset": 0, 00:13:02.730 "data_size": 65536 00:13:02.730 }, 00:13:02.730 { 00:13:02.730 "name": "BaseBdev4", 00:13:02.730 "uuid": "3e40eb99-b472-5084-890c-fe9721ecb8cb", 00:13:02.730 "is_configured": true, 00:13:02.730 "data_offset": 0, 00:13:02.730 "data_size": 65536 00:13:02.730 } 00:13:02.730 ] 00:13:02.730 }' 00:13:02.730 01:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.730 01:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.730 [2024-10-09 01:33:01.488675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:02.730 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:02.730 Zero copy mechanism will not be used. 00:13:02.730 Running I/O for 60 seconds... 00:13:02.990 01:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:02.990 01:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.990 01:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.250 [2024-10-09 01:33:01.885298] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:03.250 01:33:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.250 01:33:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:03.250 [2024-10-09 01:33:01.937202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:03.250 [2024-10-09 01:33:01.939411] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:03.250 [2024-10-09 01:33:02.054505] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:03.250 
[2024-10-09 01:33:02.054930] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:03.510 [2024-10-09 01:33:02.158733] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:03.510 [2024-10-09 01:33:02.159487] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:03.769 183.00 IOPS, 549.00 MiB/s [2024-10-09T01:33:02.662Z] [2024-10-09 01:33:02.506627] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:04.029 [2024-10-09 01:33:02.739687] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:04.029 01:33:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.029 01:33:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.029 01:33:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.029 01:33:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:04.029 01:33:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.289 01:33:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.289 01:33:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.289 01:33:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.289 01:33:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.289 01:33:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.289 01:33:02 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.289 "name": "raid_bdev1", 00:13:04.289 "uuid": "929394b5-11a2-4512-aec3-4b2d1249f716", 00:13:04.289 "strip_size_kb": 0, 00:13:04.289 "state": "online", 00:13:04.289 "raid_level": "raid1", 00:13:04.289 "superblock": false, 00:13:04.289 "num_base_bdevs": 4, 00:13:04.289 "num_base_bdevs_discovered": 4, 00:13:04.289 "num_base_bdevs_operational": 4, 00:13:04.289 "process": { 00:13:04.289 "type": "rebuild", 00:13:04.289 "target": "spare", 00:13:04.289 "progress": { 00:13:04.289 "blocks": 10240, 00:13:04.289 "percent": 15 00:13:04.289 } 00:13:04.289 }, 00:13:04.289 "base_bdevs_list": [ 00:13:04.289 { 00:13:04.289 "name": "spare", 00:13:04.289 "uuid": "508fd13d-670a-57b3-bc51-004d52e00cfc", 00:13:04.289 "is_configured": true, 00:13:04.289 "data_offset": 0, 00:13:04.289 "data_size": 65536 00:13:04.289 }, 00:13:04.289 { 00:13:04.289 "name": "BaseBdev2", 00:13:04.289 "uuid": "a47862a5-c712-567a-85a2-160e130bda91", 00:13:04.289 "is_configured": true, 00:13:04.289 "data_offset": 0, 00:13:04.289 "data_size": 65536 00:13:04.289 }, 00:13:04.289 { 00:13:04.289 "name": "BaseBdev3", 00:13:04.289 "uuid": "57515727-3e7f-56a9-b171-e076ab70aca0", 00:13:04.289 "is_configured": true, 00:13:04.289 "data_offset": 0, 00:13:04.290 "data_size": 65536 00:13:04.290 }, 00:13:04.290 { 00:13:04.290 "name": "BaseBdev4", 00:13:04.290 "uuid": "3e40eb99-b472-5084-890c-fe9721ecb8cb", 00:13:04.290 "is_configured": true, 00:13:04.290 "data_offset": 0, 00:13:04.290 "data_size": 65536 00:13:04.290 } 00:13:04.290 ] 00:13:04.290 }' 00:13:04.290 01:33:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.290 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:04.290 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.290 01:33:03 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.290 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:04.290 01:33:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.290 01:33:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.290 [2024-10-09 01:33:03.081683] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:04.290 [2024-10-09 01:33:03.108697] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:04.290 [2024-10-09 01:33:03.111798] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.290 [2024-10-09 01:33:03.111902] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:04.290 [2024-10-09 01:33:03.111926] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:04.290 [2024-10-09 01:33:03.127633] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000062f0 00:13:04.290 01:33:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.290 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:04.290 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.290 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.290 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.290 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.290 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:04.290 01:33:03 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.290 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.290 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.290 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.290 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.290 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.290 01:33:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.290 01:33:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.550 01:33:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.550 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.550 "name": "raid_bdev1", 00:13:04.550 "uuid": "929394b5-11a2-4512-aec3-4b2d1249f716", 00:13:04.550 "strip_size_kb": 0, 00:13:04.550 "state": "online", 00:13:04.550 "raid_level": "raid1", 00:13:04.550 "superblock": false, 00:13:04.550 "num_base_bdevs": 4, 00:13:04.550 "num_base_bdevs_discovered": 3, 00:13:04.550 "num_base_bdevs_operational": 3, 00:13:04.550 "base_bdevs_list": [ 00:13:04.550 { 00:13:04.550 "name": null, 00:13:04.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.550 "is_configured": false, 00:13:04.550 "data_offset": 0, 00:13:04.550 "data_size": 65536 00:13:04.550 }, 00:13:04.550 { 00:13:04.550 "name": "BaseBdev2", 00:13:04.550 "uuid": "a47862a5-c712-567a-85a2-160e130bda91", 00:13:04.550 "is_configured": true, 00:13:04.550 "data_offset": 0, 00:13:04.550 "data_size": 65536 00:13:04.550 }, 00:13:04.550 { 00:13:04.550 "name": "BaseBdev3", 00:13:04.550 "uuid": "57515727-3e7f-56a9-b171-e076ab70aca0", 00:13:04.550 "is_configured": true, 
00:13:04.550 "data_offset": 0, 00:13:04.550 "data_size": 65536 00:13:04.550 }, 00:13:04.550 { 00:13:04.550 "name": "BaseBdev4", 00:13:04.550 "uuid": "3e40eb99-b472-5084-890c-fe9721ecb8cb", 00:13:04.550 "is_configured": true, 00:13:04.550 "data_offset": 0, 00:13:04.550 "data_size": 65536 00:13:04.550 } 00:13:04.550 ] 00:13:04.550 }' 00:13:04.550 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.550 01:33:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.810 162.50 IOPS, 487.50 MiB/s [2024-10-09T01:33:03.703Z] 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:04.810 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.810 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:04.810 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:04.810 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.810 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.810 01:33:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.810 01:33:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.810 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.810 01:33:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.810 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.810 "name": "raid_bdev1", 00:13:04.810 "uuid": "929394b5-11a2-4512-aec3-4b2d1249f716", 00:13:04.810 "strip_size_kb": 0, 00:13:04.810 "state": "online", 00:13:04.810 "raid_level": "raid1", 00:13:04.810 
"superblock": false, 00:13:04.810 "num_base_bdevs": 4, 00:13:04.810 "num_base_bdevs_discovered": 3, 00:13:04.810 "num_base_bdevs_operational": 3, 00:13:04.810 "base_bdevs_list": [ 00:13:04.810 { 00:13:04.810 "name": null, 00:13:04.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.810 "is_configured": false, 00:13:04.810 "data_offset": 0, 00:13:04.810 "data_size": 65536 00:13:04.810 }, 00:13:04.810 { 00:13:04.810 "name": "BaseBdev2", 00:13:04.810 "uuid": "a47862a5-c712-567a-85a2-160e130bda91", 00:13:04.810 "is_configured": true, 00:13:04.810 "data_offset": 0, 00:13:04.810 "data_size": 65536 00:13:04.810 }, 00:13:04.810 { 00:13:04.810 "name": "BaseBdev3", 00:13:04.810 "uuid": "57515727-3e7f-56a9-b171-e076ab70aca0", 00:13:04.810 "is_configured": true, 00:13:04.810 "data_offset": 0, 00:13:04.810 "data_size": 65536 00:13:04.810 }, 00:13:04.810 { 00:13:04.810 "name": "BaseBdev4", 00:13:04.810 "uuid": "3e40eb99-b472-5084-890c-fe9721ecb8cb", 00:13:04.810 "is_configured": true, 00:13:04.810 "data_offset": 0, 00:13:04.810 "data_size": 65536 00:13:04.810 } 00:13:04.810 ] 00:13:04.810 }' 00:13:04.810 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.810 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:04.810 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.070 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:05.070 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:05.070 01:33:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.070 01:33:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.070 [2024-10-09 01:33:03.721468] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:13:05.070 01:33:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.070 01:33:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:05.070 [2024-10-09 01:33:03.756082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:13:05.070 [2024-10-09 01:33:03.758063] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:05.070 [2024-10-09 01:33:03.872100] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:05.070 [2024-10-09 01:33:03.873357] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:05.330 [2024-10-09 01:33:04.089115] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:05.330 [2024-10-09 01:33:04.089603] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:05.590 [2024-10-09 01:33:04.415006] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:05.849 150.00 IOPS, 450.00 MiB/s [2024-10-09T01:33:04.742Z] [2024-10-09 01:33:04.624602] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:05.849 [2024-10-09 01:33:04.625252] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:06.109 01:33:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.109 01:33:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.109 01:33:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:13:06.109 01:33:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:06.109 01:33:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.109 01:33:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.109 01:33:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.109 01:33:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.109 01:33:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.109 01:33:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.109 01:33:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.109 "name": "raid_bdev1", 00:13:06.109 "uuid": "929394b5-11a2-4512-aec3-4b2d1249f716", 00:13:06.109 "strip_size_kb": 0, 00:13:06.109 "state": "online", 00:13:06.109 "raid_level": "raid1", 00:13:06.109 "superblock": false, 00:13:06.109 "num_base_bdevs": 4, 00:13:06.109 "num_base_bdevs_discovered": 4, 00:13:06.109 "num_base_bdevs_operational": 4, 00:13:06.109 "process": { 00:13:06.109 "type": "rebuild", 00:13:06.109 "target": "spare", 00:13:06.109 "progress": { 00:13:06.109 "blocks": 10240, 00:13:06.109 "percent": 15 00:13:06.109 } 00:13:06.109 }, 00:13:06.109 "base_bdevs_list": [ 00:13:06.109 { 00:13:06.109 "name": "spare", 00:13:06.109 "uuid": "508fd13d-670a-57b3-bc51-004d52e00cfc", 00:13:06.109 "is_configured": true, 00:13:06.109 "data_offset": 0, 00:13:06.109 "data_size": 65536 00:13:06.109 }, 00:13:06.109 { 00:13:06.109 "name": "BaseBdev2", 00:13:06.109 "uuid": "a47862a5-c712-567a-85a2-160e130bda91", 00:13:06.109 "is_configured": true, 00:13:06.109 "data_offset": 0, 00:13:06.109 "data_size": 65536 00:13:06.109 }, 00:13:06.109 { 00:13:06.109 "name": "BaseBdev3", 00:13:06.109 "uuid": 
"57515727-3e7f-56a9-b171-e076ab70aca0", 00:13:06.109 "is_configured": true, 00:13:06.109 "data_offset": 0, 00:13:06.109 "data_size": 65536 00:13:06.109 }, 00:13:06.109 { 00:13:06.109 "name": "BaseBdev4", 00:13:06.109 "uuid": "3e40eb99-b472-5084-890c-fe9721ecb8cb", 00:13:06.109 "is_configured": true, 00:13:06.109 "data_offset": 0, 00:13:06.109 "data_size": 65536 00:13:06.110 } 00:13:06.110 ] 00:13:06.110 }' 00:13:06.110 01:33:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.110 01:33:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.110 01:33:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.110 01:33:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.110 01:33:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:06.110 01:33:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:06.110 01:33:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:06.110 01:33:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:06.110 01:33:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:06.110 01:33:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.110 01:33:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.110 [2024-10-09 01:33:04.888766] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:06.374 [2024-10-09 01:33:05.061344] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000062f0 00:13:06.374 [2024-10-09 01:33:05.061389] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 
0x60d000006490 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.374 "name": "raid_bdev1", 00:13:06.374 "uuid": "929394b5-11a2-4512-aec3-4b2d1249f716", 00:13:06.374 "strip_size_kb": 0, 00:13:06.374 "state": "online", 00:13:06.374 "raid_level": "raid1", 00:13:06.374 "superblock": false, 00:13:06.374 "num_base_bdevs": 4, 00:13:06.374 "num_base_bdevs_discovered": 3, 00:13:06.374 "num_base_bdevs_operational": 3, 00:13:06.374 "process": { 00:13:06.374 "type": "rebuild", 00:13:06.374 "target": "spare", 
00:13:06.374 "progress": { 00:13:06.374 "blocks": 14336, 00:13:06.374 "percent": 21 00:13:06.374 } 00:13:06.374 }, 00:13:06.374 "base_bdevs_list": [ 00:13:06.374 { 00:13:06.374 "name": "spare", 00:13:06.374 "uuid": "508fd13d-670a-57b3-bc51-004d52e00cfc", 00:13:06.374 "is_configured": true, 00:13:06.374 "data_offset": 0, 00:13:06.374 "data_size": 65536 00:13:06.374 }, 00:13:06.374 { 00:13:06.374 "name": null, 00:13:06.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.374 "is_configured": false, 00:13:06.374 "data_offset": 0, 00:13:06.374 "data_size": 65536 00:13:06.374 }, 00:13:06.374 { 00:13:06.374 "name": "BaseBdev3", 00:13:06.374 "uuid": "57515727-3e7f-56a9-b171-e076ab70aca0", 00:13:06.374 "is_configured": true, 00:13:06.374 "data_offset": 0, 00:13:06.374 "data_size": 65536 00:13:06.374 }, 00:13:06.374 { 00:13:06.374 "name": "BaseBdev4", 00:13:06.374 "uuid": "3e40eb99-b472-5084-890c-fe9721ecb8cb", 00:13:06.374 "is_configured": true, 00:13:06.374 "data_offset": 0, 00:13:06.374 "data_size": 65536 00:13:06.374 } 00:13:06.374 ] 00:13:06.374 }' 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.374 [2024-10-09 01:33:05.183800] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=401 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.374 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.374 "name": "raid_bdev1", 00:13:06.374 "uuid": "929394b5-11a2-4512-aec3-4b2d1249f716", 00:13:06.374 "strip_size_kb": 0, 00:13:06.374 "state": "online", 00:13:06.374 "raid_level": "raid1", 00:13:06.374 "superblock": false, 00:13:06.374 "num_base_bdevs": 4, 00:13:06.374 "num_base_bdevs_discovered": 3, 00:13:06.374 "num_base_bdevs_operational": 3, 00:13:06.374 "process": { 00:13:06.374 "type": "rebuild", 00:13:06.374 "target": "spare", 00:13:06.374 "progress": { 00:13:06.374 "blocks": 16384, 00:13:06.374 "percent": 25 00:13:06.374 } 00:13:06.374 }, 00:13:06.374 "base_bdevs_list": [ 00:13:06.374 { 00:13:06.374 "name": "spare", 00:13:06.374 "uuid": "508fd13d-670a-57b3-bc51-004d52e00cfc", 00:13:06.374 "is_configured": true, 00:13:06.374 "data_offset": 0, 00:13:06.374 "data_size": 65536 00:13:06.374 }, 00:13:06.374 { 00:13:06.374 "name": null, 
00:13:06.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.374 "is_configured": false, 00:13:06.374 "data_offset": 0, 00:13:06.374 "data_size": 65536 00:13:06.374 }, 00:13:06.374 { 00:13:06.374 "name": "BaseBdev3", 00:13:06.374 "uuid": "57515727-3e7f-56a9-b171-e076ab70aca0", 00:13:06.374 "is_configured": true, 00:13:06.374 "data_offset": 0, 00:13:06.375 "data_size": 65536 00:13:06.375 }, 00:13:06.375 { 00:13:06.375 "name": "BaseBdev4", 00:13:06.375 "uuid": "3e40eb99-b472-5084-890c-fe9721ecb8cb", 00:13:06.375 "is_configured": true, 00:13:06.375 "data_offset": 0, 00:13:06.375 "data_size": 65536 00:13:06.375 } 00:13:06.375 ] 00:13:06.375 }' 00:13:06.375 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.641 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.641 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.641 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.641 01:33:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:06.901 125.00 IOPS, 375.00 MiB/s [2024-10-09T01:33:05.794Z] [2024-10-09 01:33:05.539805] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:07.160 [2024-10-09 01:33:05.966347] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:07.160 [2024-10-09 01:33:05.966574] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:07.419 [2024-10-09 01:33:06.195842] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:07.679 01:33:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:13:07.679 01:33:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.679 01:33:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.679 01:33:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.679 01:33:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.679 01:33:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.679 01:33:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.679 01:33:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.679 01:33:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.679 01:33:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.679 01:33:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.679 01:33:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.679 "name": "raid_bdev1", 00:13:07.679 "uuid": "929394b5-11a2-4512-aec3-4b2d1249f716", 00:13:07.679 "strip_size_kb": 0, 00:13:07.679 "state": "online", 00:13:07.679 "raid_level": "raid1", 00:13:07.679 "superblock": false, 00:13:07.679 "num_base_bdevs": 4, 00:13:07.679 "num_base_bdevs_discovered": 3, 00:13:07.679 "num_base_bdevs_operational": 3, 00:13:07.679 "process": { 00:13:07.679 "type": "rebuild", 00:13:07.679 "target": "spare", 00:13:07.679 "progress": { 00:13:07.679 "blocks": 34816, 00:13:07.679 "percent": 53 00:13:07.679 } 00:13:07.679 }, 00:13:07.679 "base_bdevs_list": [ 00:13:07.679 { 00:13:07.679 "name": "spare", 00:13:07.679 "uuid": "508fd13d-670a-57b3-bc51-004d52e00cfc", 00:13:07.679 "is_configured": true, 00:13:07.679 "data_offset": 
0, 00:13:07.679 "data_size": 65536 00:13:07.679 }, 00:13:07.679 { 00:13:07.679 "name": null, 00:13:07.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.679 "is_configured": false, 00:13:07.679 "data_offset": 0, 00:13:07.679 "data_size": 65536 00:13:07.679 }, 00:13:07.679 { 00:13:07.679 "name": "BaseBdev3", 00:13:07.679 "uuid": "57515727-3e7f-56a9-b171-e076ab70aca0", 00:13:07.679 "is_configured": true, 00:13:07.679 "data_offset": 0, 00:13:07.679 "data_size": 65536 00:13:07.679 }, 00:13:07.679 { 00:13:07.679 "name": "BaseBdev4", 00:13:07.679 "uuid": "3e40eb99-b472-5084-890c-fe9721ecb8cb", 00:13:07.679 "is_configured": true, 00:13:07.679 "data_offset": 0, 00:13:07.679 "data_size": 65536 00:13:07.679 } 00:13:07.679 ] 00:13:07.679 }' 00:13:07.679 01:33:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.679 01:33:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:07.679 01:33:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.679 01:33:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.679 01:33:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:07.679 114.60 IOPS, 343.80 MiB/s [2024-10-09T01:33:06.572Z] [2024-10-09 01:33:06.548631] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:07.940 [2024-10-09 01:33:06.655264] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:08.200 [2024-10-09 01:33:06.987825] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:08.459 [2024-10-09 01:33:07.094687] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 
49152 00:13:08.719 01:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:08.719 01:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.719 01:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.719 01:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.719 01:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.719 01:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.720 01:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.720 01:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.720 01:33:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.720 01:33:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.720 101.83 IOPS, 305.50 MiB/s [2024-10-09T01:33:07.613Z] 01:33:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.720 01:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.720 "name": "raid_bdev1", 00:13:08.720 "uuid": "929394b5-11a2-4512-aec3-4b2d1249f716", 00:13:08.720 "strip_size_kb": 0, 00:13:08.720 "state": "online", 00:13:08.720 "raid_level": "raid1", 00:13:08.720 "superblock": false, 00:13:08.720 "num_base_bdevs": 4, 00:13:08.720 "num_base_bdevs_discovered": 3, 00:13:08.720 "num_base_bdevs_operational": 3, 00:13:08.720 "process": { 00:13:08.720 "type": "rebuild", 00:13:08.720 "target": "spare", 00:13:08.720 "progress": { 00:13:08.720 "blocks": 51200, 00:13:08.720 "percent": 78 00:13:08.720 } 00:13:08.720 }, 00:13:08.720 "base_bdevs_list": [ 00:13:08.720 { 
00:13:08.720 "name": "spare", 00:13:08.720 "uuid": "508fd13d-670a-57b3-bc51-004d52e00cfc", 00:13:08.720 "is_configured": true, 00:13:08.720 "data_offset": 0, 00:13:08.720 "data_size": 65536 00:13:08.720 }, 00:13:08.720 { 00:13:08.720 "name": null, 00:13:08.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.720 "is_configured": false, 00:13:08.720 "data_offset": 0, 00:13:08.720 "data_size": 65536 00:13:08.720 }, 00:13:08.720 { 00:13:08.720 "name": "BaseBdev3", 00:13:08.720 "uuid": "57515727-3e7f-56a9-b171-e076ab70aca0", 00:13:08.720 "is_configured": true, 00:13:08.720 "data_offset": 0, 00:13:08.720 "data_size": 65536 00:13:08.720 }, 00:13:08.720 { 00:13:08.720 "name": "BaseBdev4", 00:13:08.720 "uuid": "3e40eb99-b472-5084-890c-fe9721ecb8cb", 00:13:08.720 "is_configured": true, 00:13:08.720 "data_offset": 0, 00:13:08.720 "data_size": 65536 00:13:08.720 } 00:13:08.720 ] 00:13:08.720 }' 00:13:08.720 01:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.720 01:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:08.720 01:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.980 01:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.980 01:33:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:09.550 [2024-10-09 01:33:08.179143] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:09.550 [2024-10-09 01:33:08.284081] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:09.550 [2024-10-09 01:33:08.287473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.810 93.29 IOPS, 279.86 MiB/s [2024-10-09T01:33:08.703Z] 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 
00:13:09.810 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:09.810 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.810 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:09.810 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:09.810 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.810 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.810 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.810 01:33:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.810 01:33:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.810 01:33:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.810 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.810 "name": "raid_bdev1", 00:13:09.810 "uuid": "929394b5-11a2-4512-aec3-4b2d1249f716", 00:13:09.810 "strip_size_kb": 0, 00:13:09.810 "state": "online", 00:13:09.810 "raid_level": "raid1", 00:13:09.810 "superblock": false, 00:13:09.810 "num_base_bdevs": 4, 00:13:09.810 "num_base_bdevs_discovered": 3, 00:13:09.810 "num_base_bdevs_operational": 3, 00:13:09.810 "base_bdevs_list": [ 00:13:09.810 { 00:13:09.810 "name": "spare", 00:13:09.810 "uuid": "508fd13d-670a-57b3-bc51-004d52e00cfc", 00:13:09.810 "is_configured": true, 00:13:09.810 "data_offset": 0, 00:13:09.810 "data_size": 65536 00:13:09.810 }, 00:13:09.810 { 00:13:09.810 "name": null, 00:13:09.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.810 "is_configured": false, 00:13:09.810 "data_offset": 0, 
00:13:09.810 "data_size": 65536 00:13:09.810 }, 00:13:09.810 { 00:13:09.810 "name": "BaseBdev3", 00:13:09.810 "uuid": "57515727-3e7f-56a9-b171-e076ab70aca0", 00:13:09.810 "is_configured": true, 00:13:09.810 "data_offset": 0, 00:13:09.810 "data_size": 65536 00:13:09.810 }, 00:13:09.810 { 00:13:09.810 "name": "BaseBdev4", 00:13:09.810 "uuid": "3e40eb99-b472-5084-890c-fe9721ecb8cb", 00:13:09.810 "is_configured": true, 00:13:09.810 "data_offset": 0, 00:13:09.810 "data_size": 65536 00:13:09.810 } 00:13:09.810 ] 00:13:09.810 }' 00:13:09.810 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.810 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:09.810 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.070 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:10.070 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:10.070 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:10.070 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.070 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:10.070 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:10.070 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.070 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.070 01:33:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.070 01:33:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.070 01:33:08 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.070 01:33:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.070 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.070 "name": "raid_bdev1", 00:13:10.070 "uuid": "929394b5-11a2-4512-aec3-4b2d1249f716", 00:13:10.070 "strip_size_kb": 0, 00:13:10.070 "state": "online", 00:13:10.070 "raid_level": "raid1", 00:13:10.070 "superblock": false, 00:13:10.070 "num_base_bdevs": 4, 00:13:10.070 "num_base_bdevs_discovered": 3, 00:13:10.070 "num_base_bdevs_operational": 3, 00:13:10.070 "base_bdevs_list": [ 00:13:10.070 { 00:13:10.070 "name": "spare", 00:13:10.070 "uuid": "508fd13d-670a-57b3-bc51-004d52e00cfc", 00:13:10.070 "is_configured": true, 00:13:10.070 "data_offset": 0, 00:13:10.070 "data_size": 65536 00:13:10.070 }, 00:13:10.070 { 00:13:10.070 "name": null, 00:13:10.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.070 "is_configured": false, 00:13:10.070 "data_offset": 0, 00:13:10.070 "data_size": 65536 00:13:10.070 }, 00:13:10.070 { 00:13:10.070 "name": "BaseBdev3", 00:13:10.070 "uuid": "57515727-3e7f-56a9-b171-e076ab70aca0", 00:13:10.070 "is_configured": true, 00:13:10.070 "data_offset": 0, 00:13:10.070 "data_size": 65536 00:13:10.070 }, 00:13:10.070 { 00:13:10.070 "name": "BaseBdev4", 00:13:10.070 "uuid": "3e40eb99-b472-5084-890c-fe9721ecb8cb", 00:13:10.070 "is_configured": true, 00:13:10.070 "data_offset": 0, 00:13:10.070 "data_size": 65536 00:13:10.070 } 00:13:10.070 ] 00:13:10.070 }' 00:13:10.070 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.070 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:10.071 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.071 01:33:08 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:10.071 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:10.071 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.071 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.071 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.071 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.071 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:10.071 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.071 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.071 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.071 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.071 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.071 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.071 01:33:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.071 01:33:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.071 01:33:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.071 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.071 "name": "raid_bdev1", 00:13:10.071 "uuid": "929394b5-11a2-4512-aec3-4b2d1249f716", 00:13:10.071 "strip_size_kb": 0, 00:13:10.071 "state": "online", 00:13:10.071 "raid_level": "raid1", 
00:13:10.071 "superblock": false, 00:13:10.071 "num_base_bdevs": 4, 00:13:10.071 "num_base_bdevs_discovered": 3, 00:13:10.071 "num_base_bdevs_operational": 3, 00:13:10.071 "base_bdevs_list": [ 00:13:10.071 { 00:13:10.071 "name": "spare", 00:13:10.071 "uuid": "508fd13d-670a-57b3-bc51-004d52e00cfc", 00:13:10.071 "is_configured": true, 00:13:10.071 "data_offset": 0, 00:13:10.071 "data_size": 65536 00:13:10.071 }, 00:13:10.071 { 00:13:10.071 "name": null, 00:13:10.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.071 "is_configured": false, 00:13:10.071 "data_offset": 0, 00:13:10.071 "data_size": 65536 00:13:10.071 }, 00:13:10.071 { 00:13:10.071 "name": "BaseBdev3", 00:13:10.071 "uuid": "57515727-3e7f-56a9-b171-e076ab70aca0", 00:13:10.071 "is_configured": true, 00:13:10.071 "data_offset": 0, 00:13:10.071 "data_size": 65536 00:13:10.071 }, 00:13:10.071 { 00:13:10.071 "name": "BaseBdev4", 00:13:10.071 "uuid": "3e40eb99-b472-5084-890c-fe9721ecb8cb", 00:13:10.071 "is_configured": true, 00:13:10.071 "data_offset": 0, 00:13:10.071 "data_size": 65536 00:13:10.071 } 00:13:10.071 ] 00:13:10.071 }' 00:13:10.071 01:33:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.071 01:33:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.641 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:10.641 01:33:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.641 01:33:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.641 [2024-10-09 01:33:09.359711] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:10.641 [2024-10-09 01:33:09.359834] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:10.641 00:13:10.641 Latency(us) 00:13:10.641 [2024-10-09T01:33:09.534Z] Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:13:10.641 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:10.641 raid_bdev1 : 7.97 85.34 256.02 0.00 0.00 16415.85 283.82 111959.00 00:13:10.641 [2024-10-09T01:33:09.534Z] =================================================================================================================== 00:13:10.641 [2024-10-09T01:33:09.534Z] Total : 85.34 256.02 0.00 0.00 16415.85 283.82 111959.00 00:13:10.641 [2024-10-09 01:33:09.462800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.641 [2024-10-09 01:33:09.462894] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:10.642 [2024-10-09 01:33:09.463035] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:10.642 [2024-10-09 01:33:09.463096] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:10.642 { 00:13:10.642 "results": [ 00:13:10.642 { 00:13:10.642 "job": "raid_bdev1", 00:13:10.642 "core_mask": "0x1", 00:13:10.642 "workload": "randrw", 00:13:10.642 "percentage": 50, 00:13:10.642 "status": "finished", 00:13:10.642 "queue_depth": 2, 00:13:10.642 "io_size": 3145728, 00:13:10.642 "runtime": 7.968187, 00:13:10.642 "iops": 85.3393626429701, 00:13:10.642 "mibps": 256.0180879289103, 00:13:10.642 "io_failed": 0, 00:13:10.642 "io_timeout": 0, 00:13:10.642 "avg_latency_us": 16415.854143487777, 00:13:10.642 "min_latency_us": 283.82463174409486, 00:13:10.642 "max_latency_us": 111958.99938987187 00:13:10.642 } 00:13:10.642 ], 00:13:10.642 "core_count": 1 00:13:10.642 } 00:13:10.642 01:33:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.642 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.642 01:33:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:10.642 01:33:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.642 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:10.642 01:33:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.642 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:10.642 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:10.642 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:10.642 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:10.642 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:10.642 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:10.642 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:10.642 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:10.642 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:10.642 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:10.642 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:10.642 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:10.642 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:10.901 /dev/nbd0 00:13:10.901 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:10.901 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:10.901 01:33:09 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:10.901 01:33:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:10.901 01:33:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:10.901 01:33:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:10.901 01:33:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:10.901 01:33:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:10.901 01:33:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:10.901 01:33:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:10.901 01:33:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.901 1+0 records in 00:13:10.901 1+0 records out 00:13:10.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580003 s, 7.1 MB/s 00:13:10.901 01:33:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.901 01:33:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:10.901 01:33:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.901 01:33:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:10.901 01:33:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:10.901 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:10.901 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:10.901 01:33:09 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:10.901 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:10.901 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:10.901 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:10.901 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:10.901 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:10.901 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:10.901 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:10.901 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:10.901 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:10.902 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:10.902 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:10.902 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:10.902 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:10.902 01:33:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:11.161 /dev/nbd1 00:13:11.161 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:11.161 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:11.161 01:33:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:11.161 01:33:10 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@869 -- # local i 00:13:11.161 01:33:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:11.161 01:33:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:11.161 01:33:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:11.161 01:33:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:11.161 01:33:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:11.161 01:33:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:11.161 01:33:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.161 1+0 records in 00:13:11.161 1+0 records out 00:13:11.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468811 s, 8.7 MB/s 00:13:11.161 01:33:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.161 01:33:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:11.161 01:33:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.161 01:33:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:11.161 01:33:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:11.161 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.161 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:11.161 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:11.420 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd1 00:13:11.420 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:11.420 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:11.420 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:11.420 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:11.420 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.420 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 
00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:11.679 /dev/nbd1 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:11.679 01:33:10 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.939 1+0 records in 00:13:11.939 1+0 records out 00:13:11.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396019 s, 10.3 MB/s 00:13:11.939 01:33:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.939 01:33:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:11.939 01:33:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.939 01:33:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:11.939 01:33:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:11.939 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.939 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:11.939 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:11.939 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:11.939 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:11.939 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:11.939 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:11.939 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:11.939 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.939 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 
00:13:12.199 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:12.199 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:12.199 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:12.199 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.199 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.199 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:12.199 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:12.199 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.199 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:12.199 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:12.199 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:12.199 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:12.199 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:12.199 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.199 01:33:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:12.199 01:33:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:12.199 01:33:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:12.199 01:33:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:12.199 01:33:11 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.199 01:33:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.199 01:33:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:12.199 01:33:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:12.199 01:33:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.199 01:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:12.199 01:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 90491 00:13:12.199 01:33:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 90491 ']' 00:13:12.199 01:33:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 90491 00:13:12.199 01:33:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:13:12.459 01:33:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:12.459 01:33:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90491 00:13:12.459 01:33:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:12.459 01:33:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:12.459 killing process with pid 90491 00:13:12.459 Received shutdown signal, test time was about 9.635467 seconds 00:13:12.459 00:13:12.459 Latency(us) 00:13:12.459 [2024-10-09T01:33:11.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.459 [2024-10-09T01:33:11.352Z] =================================================================================================================== 00:13:12.459 [2024-10-09T01:33:11.352Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:12.459 01:33:11 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 90491' 00:13:12.459 01:33:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 90491 00:13:12.459 [2024-10-09 01:33:11.127061] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:12.459 01:33:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 90491 00:13:12.459 [2024-10-09 01:33:11.209458] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:12.719 ************************************ 00:13:12.719 END TEST raid_rebuild_test_io 00:13:12.719 ************************************ 00:13:12.719 01:33:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:12.719 00:13:12.719 real 0m11.818s 00:13:12.719 user 0m15.214s 00:13:12.719 sys 0m1.838s 00:13:12.719 01:33:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:12.719 01:33:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.980 01:33:11 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:13:12.980 01:33:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:12.980 01:33:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:12.980 01:33:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:12.980 ************************************ 00:13:12.980 START TEST raid_rebuild_test_sb_io 00:13:12.980 ************************************ 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local 
superblock=true 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=90890 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 90890 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 90890 ']' 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:12.980 01:33:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.980 [2024-10-09 01:33:11.761350] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:13:12.980 [2024-10-09 01:33:11.761563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90890 ] 00:13:12.980 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:12.980 Zero copy mechanism will not be used. 00:13:13.240 [2024-10-09 01:33:11.893902] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:13.240 [2024-10-09 01:33:11.921853] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.240 [2024-10-09 01:33:11.991535] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.240 [2024-10-09 01:33:12.066720] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:13.240 [2024-10-09 01:33:12.066847] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:13.808 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:13.808 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:13:13.808 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:13.808 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:13.808 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.808 01:33:12 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.808 BaseBdev1_malloc 00:13:13.808 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.808 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:13.808 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.808 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.808 [2024-10-09 01:33:12.601393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:13.808 [2024-10-09 01:33:12.601466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.808 [2024-10-09 01:33:12.601506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:13.808 [2024-10-09 01:33:12.601538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.808 [2024-10-09 01:33:12.603961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.808 [2024-10-09 01:33:12.603997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:13.808 BaseBdev1 00:13:13.808 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.808 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:13.809 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:13.809 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.809 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.809 BaseBdev2_malloc 00:13:13.809 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.809 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:13.809 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.809 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.809 [2024-10-09 01:33:12.665779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:13.809 [2024-10-09 01:33:12.665984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.809 [2024-10-09 01:33:12.666038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:13.809 [2024-10-09 01:33:12.666066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.809 [2024-10-09 01:33:12.671187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.809 [2024-10-09 01:33:12.671264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:13.809 BaseBdev2 00:13:13.809 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.809 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:13.809 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:13.809 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.809 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.809 BaseBdev3_malloc 00:13:13.809 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.809 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:13:13.809 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.809 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.069 [2024-10-09 01:33:12.704347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:14.069 [2024-10-09 01:33:12.704403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.069 [2024-10-09 01:33:12.704428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:14.069 [2024-10-09 01:33:12.704440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.069 [2024-10-09 01:33:12.706895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.069 [2024-10-09 01:33:12.706936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:14.069 BaseBdev3 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.069 BaseBdev4_malloc 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.069 01:33:12 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.069 [2024-10-09 01:33:12.739297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:14.069 [2024-10-09 01:33:12.739355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.069 [2024-10-09 01:33:12.739376] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:14.069 [2024-10-09 01:33:12.739387] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.069 [2024-10-09 01:33:12.741705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.069 [2024-10-09 01:33:12.741739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:14.069 BaseBdev4 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.069 spare_malloc 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.069 spare_delay 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.069 [2024-10-09 01:33:12.785649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:14.069 [2024-10-09 01:33:12.785701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.069 [2024-10-09 01:33:12.785720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:14.069 [2024-10-09 01:33:12.785730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.069 [2024-10-09 01:33:12.788006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.069 [2024-10-09 01:33:12.788044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:14.069 spare 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.069 [2024-10-09 01:33:12.797748] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:14.069 [2024-10-09 01:33:12.799757] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:14.069 [2024-10-09 01:33:12.799823] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:14.069 [2024-10-09 01:33:12.799870] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:14.069 [2024-10-09 01:33:12.800037] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:14.069 [2024-10-09 01:33:12.800055] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:14.069 [2024-10-09 01:33:12.800314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:14.069 [2024-10-09 01:33:12.800461] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:14.069 [2024-10-09 01:33:12.800476] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:14.069 [2024-10-09 01:33:12.800627] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.069 "name": "raid_bdev1", 00:13:14.069 "uuid": "f3a792fb-c3ba-46ec-a938-82e279d9da14", 00:13:14.069 "strip_size_kb": 0, 00:13:14.069 "state": "online", 00:13:14.069 "raid_level": "raid1", 00:13:14.069 "superblock": true, 00:13:14.069 "num_base_bdevs": 4, 00:13:14.069 "num_base_bdevs_discovered": 4, 00:13:14.069 "num_base_bdevs_operational": 4, 00:13:14.069 "base_bdevs_list": [ 00:13:14.069 { 00:13:14.069 "name": "BaseBdev1", 00:13:14.069 "uuid": "94ffda02-cc04-523f-a226-5b511cd79f85", 00:13:14.069 "is_configured": true, 00:13:14.069 "data_offset": 2048, 00:13:14.069 "data_size": 63488 00:13:14.069 }, 00:13:14.069 { 00:13:14.069 "name": "BaseBdev2", 00:13:14.069 "uuid": "1883a3d8-568e-5e39-a394-f588619ea80e", 00:13:14.069 "is_configured": true, 00:13:14.069 "data_offset": 2048, 00:13:14.069 "data_size": 63488 00:13:14.069 }, 00:13:14.069 { 00:13:14.069 "name": "BaseBdev3", 00:13:14.069 "uuid": "fd25ec3e-7374-52d6-bcf2-37f8b635aa0c", 00:13:14.069 "is_configured": true, 00:13:14.069 "data_offset": 2048, 00:13:14.069 "data_size": 63488 00:13:14.069 }, 00:13:14.069 { 00:13:14.069 "name": "BaseBdev4", 00:13:14.069 "uuid": "e767daeb-15a5-59cc-8af8-035250dc627d", 00:13:14.069 
"is_configured": true, 00:13:14.069 "data_offset": 2048, 00:13:14.069 "data_size": 63488 00:13:14.069 } 00:13:14.069 ] 00:13:14.069 }' 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.069 01:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.638 [2024-10-09 01:33:13.238125] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev1 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.638 [2024-10-09 01:33:13.333808] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.638 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:13:14.639 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.639 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.639 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.639 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.639 "name": "raid_bdev1", 00:13:14.639 "uuid": "f3a792fb-c3ba-46ec-a938-82e279d9da14", 00:13:14.639 "strip_size_kb": 0, 00:13:14.639 "state": "online", 00:13:14.639 "raid_level": "raid1", 00:13:14.639 "superblock": true, 00:13:14.639 "num_base_bdevs": 4, 00:13:14.639 "num_base_bdevs_discovered": 3, 00:13:14.639 "num_base_bdevs_operational": 3, 00:13:14.639 "base_bdevs_list": [ 00:13:14.639 { 00:13:14.639 "name": null, 00:13:14.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.639 "is_configured": false, 00:13:14.639 "data_offset": 0, 00:13:14.639 "data_size": 63488 00:13:14.639 }, 00:13:14.639 { 00:13:14.639 "name": "BaseBdev2", 00:13:14.639 "uuid": "1883a3d8-568e-5e39-a394-f588619ea80e", 00:13:14.639 "is_configured": true, 00:13:14.639 "data_offset": 2048, 00:13:14.639 "data_size": 63488 00:13:14.639 }, 00:13:14.639 { 00:13:14.639 "name": "BaseBdev3", 00:13:14.639 "uuid": "fd25ec3e-7374-52d6-bcf2-37f8b635aa0c", 00:13:14.639 "is_configured": true, 00:13:14.639 "data_offset": 2048, 00:13:14.639 "data_size": 63488 00:13:14.639 }, 00:13:14.639 { 00:13:14.639 "name": "BaseBdev4", 00:13:14.639 "uuid": "e767daeb-15a5-59cc-8af8-035250dc627d", 00:13:14.639 "is_configured": true, 00:13:14.639 "data_offset": 2048, 00:13:14.639 "data_size": 63488 00:13:14.639 } 00:13:14.639 ] 00:13:14.639 }' 00:13:14.639 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.639 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.639 [2024-10-09 01:33:13.425191] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:14.639 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:14.639 Zero copy mechanism will not be used. 00:13:14.639 Running I/O for 60 seconds... 00:13:14.899 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:14.899 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.899 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.899 [2024-10-09 01:33:13.753361] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:14.899 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.899 01:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:15.158 [2024-10-09 01:33:13.815040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:15.158 [2024-10-09 01:33:13.817387] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:15.417 [2024-10-09 01:33:14.109414] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:15.417 [2024-10-09 01:33:14.109739] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:15.677 158.00 IOPS, 474.00 MiB/s [2024-10-09T01:33:14.570Z] [2024-10-09 01:33:14.463448] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:15.936 [2024-10-09 01:33:14.606002] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:15.936 01:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:13:15.936 01:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.936 01:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:15.936 01:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:15.936 01:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.936 01:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.936 01:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.936 01:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.936 01:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.936 01:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.196 01:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.196 "name": "raid_bdev1", 00:13:16.196 "uuid": "f3a792fb-c3ba-46ec-a938-82e279d9da14", 00:13:16.196 "strip_size_kb": 0, 00:13:16.196 "state": "online", 00:13:16.196 "raid_level": "raid1", 00:13:16.196 "superblock": true, 00:13:16.196 "num_base_bdevs": 4, 00:13:16.196 "num_base_bdevs_discovered": 4, 00:13:16.196 "num_base_bdevs_operational": 4, 00:13:16.196 "process": { 00:13:16.196 "type": "rebuild", 00:13:16.196 "target": "spare", 00:13:16.196 "progress": { 00:13:16.196 "blocks": 12288, 00:13:16.196 "percent": 19 00:13:16.196 } 00:13:16.196 }, 00:13:16.196 "base_bdevs_list": [ 00:13:16.196 { 00:13:16.196 "name": "spare", 00:13:16.196 "uuid": "8afac58c-8367-53db-b399-9fb90669a7c0", 00:13:16.196 "is_configured": true, 00:13:16.196 "data_offset": 2048, 00:13:16.196 "data_size": 63488 00:13:16.196 }, 00:13:16.196 { 00:13:16.196 "name": 
"BaseBdev2", 00:13:16.196 "uuid": "1883a3d8-568e-5e39-a394-f588619ea80e", 00:13:16.196 "is_configured": true, 00:13:16.196 "data_offset": 2048, 00:13:16.196 "data_size": 63488 00:13:16.196 }, 00:13:16.196 { 00:13:16.196 "name": "BaseBdev3", 00:13:16.196 "uuid": "fd25ec3e-7374-52d6-bcf2-37f8b635aa0c", 00:13:16.196 "is_configured": true, 00:13:16.196 "data_offset": 2048, 00:13:16.196 "data_size": 63488 00:13:16.196 }, 00:13:16.196 { 00:13:16.196 "name": "BaseBdev4", 00:13:16.196 "uuid": "e767daeb-15a5-59cc-8af8-035250dc627d", 00:13:16.196 "is_configured": true, 00:13:16.196 "data_offset": 2048, 00:13:16.196 "data_size": 63488 00:13:16.196 } 00:13:16.196 ] 00:13:16.196 }' 00:13:16.196 01:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.196 01:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:16.196 01:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.196 01:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:16.196 01:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:16.196 01:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.196 01:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.196 [2024-10-09 01:33:14.926362] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:16.196 [2024-10-09 01:33:14.953829] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:16.196 [2024-10-09 01:33:14.954881] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:16.196 [2024-10-09 01:33:15.064180] 
bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:16.196 [2024-10-09 01:33:15.076378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.196 [2024-10-09 01:33:15.076458] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:16.196 [2024-10-09 01:33:15.076507] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:16.456 [2024-10-09 01:33:15.100938] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000062f0 00:13:16.456 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.456 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:16.456 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.456 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.456 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.456 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.456 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.456 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.456 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.456 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.456 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.456 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.456 01:33:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.456 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.456 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.456 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.456 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.456 "name": "raid_bdev1", 00:13:16.456 "uuid": "f3a792fb-c3ba-46ec-a938-82e279d9da14", 00:13:16.456 "strip_size_kb": 0, 00:13:16.456 "state": "online", 00:13:16.456 "raid_level": "raid1", 00:13:16.456 "superblock": true, 00:13:16.456 "num_base_bdevs": 4, 00:13:16.456 "num_base_bdevs_discovered": 3, 00:13:16.456 "num_base_bdevs_operational": 3, 00:13:16.456 "base_bdevs_list": [ 00:13:16.456 { 00:13:16.456 "name": null, 00:13:16.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.456 "is_configured": false, 00:13:16.456 "data_offset": 0, 00:13:16.456 "data_size": 63488 00:13:16.456 }, 00:13:16.456 { 00:13:16.456 "name": "BaseBdev2", 00:13:16.456 "uuid": "1883a3d8-568e-5e39-a394-f588619ea80e", 00:13:16.456 "is_configured": true, 00:13:16.456 "data_offset": 2048, 00:13:16.456 "data_size": 63488 00:13:16.456 }, 00:13:16.456 { 00:13:16.456 "name": "BaseBdev3", 00:13:16.456 "uuid": "fd25ec3e-7374-52d6-bcf2-37f8b635aa0c", 00:13:16.456 "is_configured": true, 00:13:16.456 "data_offset": 2048, 00:13:16.456 "data_size": 63488 00:13:16.456 }, 00:13:16.456 { 00:13:16.456 "name": "BaseBdev4", 00:13:16.456 "uuid": "e767daeb-15a5-59cc-8af8-035250dc627d", 00:13:16.456 "is_configured": true, 00:13:16.456 "data_offset": 2048, 00:13:16.456 "data_size": 63488 00:13:16.456 } 00:13:16.456 ] 00:13:16.456 }' 00:13:16.456 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.456 01:33:15 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.715 139.00 IOPS, 417.00 MiB/s [2024-10-09T01:33:15.608Z] 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:16.715 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.715 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:16.715 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:16.715 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.715 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.716 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.716 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.716 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.975 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.975 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.975 "name": "raid_bdev1", 00:13:16.975 "uuid": "f3a792fb-c3ba-46ec-a938-82e279d9da14", 00:13:16.975 "strip_size_kb": 0, 00:13:16.975 "state": "online", 00:13:16.975 "raid_level": "raid1", 00:13:16.975 "superblock": true, 00:13:16.975 "num_base_bdevs": 4, 00:13:16.975 "num_base_bdevs_discovered": 3, 00:13:16.975 "num_base_bdevs_operational": 3, 00:13:16.975 "base_bdevs_list": [ 00:13:16.975 { 00:13:16.975 "name": null, 00:13:16.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.975 "is_configured": false, 00:13:16.975 "data_offset": 0, 00:13:16.975 "data_size": 63488 00:13:16.975 }, 00:13:16.975 { 
00:13:16.975 "name": "BaseBdev2", 00:13:16.975 "uuid": "1883a3d8-568e-5e39-a394-f588619ea80e", 00:13:16.975 "is_configured": true, 00:13:16.975 "data_offset": 2048, 00:13:16.975 "data_size": 63488 00:13:16.975 }, 00:13:16.975 { 00:13:16.975 "name": "BaseBdev3", 00:13:16.975 "uuid": "fd25ec3e-7374-52d6-bcf2-37f8b635aa0c", 00:13:16.975 "is_configured": true, 00:13:16.975 "data_offset": 2048, 00:13:16.975 "data_size": 63488 00:13:16.975 }, 00:13:16.975 { 00:13:16.975 "name": "BaseBdev4", 00:13:16.975 "uuid": "e767daeb-15a5-59cc-8af8-035250dc627d", 00:13:16.975 "is_configured": true, 00:13:16.975 "data_offset": 2048, 00:13:16.975 "data_size": 63488 00:13:16.975 } 00:13:16.975 ] 00:13:16.975 }' 00:13:16.975 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.975 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:16.975 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.975 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:16.975 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:16.975 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.975 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.975 [2024-10-09 01:33:15.743633] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:16.975 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.975 01:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:16.975 [2024-10-09 01:33:15.826237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:13:16.975 [2024-10-09 01:33:15.828625] 
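The verify helper traced above (`bdev_raid.sh@174`–`@177`) pulls one bdev out of `rpc_cmd bdev_raid_get_bdevs all` with a jq `select`, then reads the rebuild process fields with a `// "none"` fallback so the check still works once the rebuild finishes and `.process` disappears. A minimal standalone sketch of those two filters, run against a trimmed sample shaped like the JSON in the log (the sample itself is illustrative, not the full RPC output):

```shell
#!/usr/bin/env bash
# Reduced sample of bdev_raid_get_bdevs output (only the fields the filters use).
bdevs='[{"name":"raid_bdev1","state":"online",
         "process":{"type":"rebuild","target":"spare"}},
        {"name":"other","state":"online"}]'

# Pick out the bdev under test, as bdev_raid.sh@174 does.
info=$(jq -r '.[] | select(.name == "raid_bdev1")' <<< "$bdevs")

# Read process type/target with a "none" fallback, as @176/@177 do;
# a bdev with no ".process" key would yield "none" for both.
ptype=$(jq -r '.process.type // "none"' <<< "$info")
target=$(jq -r '.process.target // "none"' <<< "$info")
echo "process: $ptype target: $target"
```

The `//` alternative operator is what lets the same two filters serve both `verify_raid_bdev_process raid_bdev1 none none` and `... rebuild spare` in the trace above.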
bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:17.238 [2024-10-09 01:33:15.939312] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:17.238 [2024-10-09 01:33:15.941365] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:17.497 [2024-10-09 01:33:16.161527] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:17.497 [2024-10-09 01:33:16.162044] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:17.757 [2024-10-09 01:33:16.424656] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:17.757 141.67 IOPS, 425.00 MiB/s [2024-10-09T01:33:16.650Z] [2024-10-09 01:33:16.539497] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:17.757 [2024-10-09 01:33:16.539966] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:18.016 [2024-10-09 01:33:16.771433] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:18.016 01:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:18.016 01:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.016 01:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:18.016 01:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:18.016 01:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:13:18.016 01:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.016 01:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.016 01:33:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.016 01:33:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.016 01:33:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.016 01:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.016 "name": "raid_bdev1", 00:13:18.016 "uuid": "f3a792fb-c3ba-46ec-a938-82e279d9da14", 00:13:18.016 "strip_size_kb": 0, 00:13:18.016 "state": "online", 00:13:18.016 "raid_level": "raid1", 00:13:18.016 "superblock": true, 00:13:18.016 "num_base_bdevs": 4, 00:13:18.016 "num_base_bdevs_discovered": 4, 00:13:18.016 "num_base_bdevs_operational": 4, 00:13:18.016 "process": { 00:13:18.016 "type": "rebuild", 00:13:18.016 "target": "spare", 00:13:18.016 "progress": { 00:13:18.016 "blocks": 14336, 00:13:18.016 "percent": 22 00:13:18.016 } 00:13:18.016 }, 00:13:18.016 "base_bdevs_list": [ 00:13:18.016 { 00:13:18.016 "name": "spare", 00:13:18.016 "uuid": "8afac58c-8367-53db-b399-9fb90669a7c0", 00:13:18.016 "is_configured": true, 00:13:18.016 "data_offset": 2048, 00:13:18.016 "data_size": 63488 00:13:18.016 }, 00:13:18.016 { 00:13:18.016 "name": "BaseBdev2", 00:13:18.016 "uuid": "1883a3d8-568e-5e39-a394-f588619ea80e", 00:13:18.016 "is_configured": true, 00:13:18.016 "data_offset": 2048, 00:13:18.016 "data_size": 63488 00:13:18.016 }, 00:13:18.016 { 00:13:18.016 "name": "BaseBdev3", 00:13:18.016 "uuid": "fd25ec3e-7374-52d6-bcf2-37f8b635aa0c", 00:13:18.016 "is_configured": true, 00:13:18.016 "data_offset": 2048, 00:13:18.016 "data_size": 63488 00:13:18.016 }, 00:13:18.016 { 00:13:18.016 "name": 
"BaseBdev4", 00:13:18.016 "uuid": "e767daeb-15a5-59cc-8af8-035250dc627d", 00:13:18.016 "is_configured": true, 00:13:18.016 "data_offset": 2048, 00:13:18.016 "data_size": 63488 00:13:18.016 } 00:13:18.016 ] 00:13:18.016 }' 00:13:18.016 01:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.016 01:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:18.017 01:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.276 01:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:18.276 01:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:18.276 01:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:18.276 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:18.276 01:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:18.276 01:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:18.276 01:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:18.276 01:33:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:18.276 01:33:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.276 01:33:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.276 [2024-10-09 01:33:16.944340] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:18.276 [2024-10-09 01:33:17.009689] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:18.536 [2024-10-09 01:33:17.214355] 
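The `line 666: [: =: unary operator expected` message above comes from the trace `'[' = false ']'`: the variable being compared expanded to an empty string, the unquoted expansion vanished, and `[` was left with only `= false`, which it cannot parse. A minimal sketch of that failure mode and the usual quoting fix (the variable name here is illustrative, not taken from the SPDK script):

```shell
#!/usr/bin/env bash
# Reproduce the "[: =: unary operator expected" failure seen at bdev_raid.sh:666.

flag=""   # empty, standing in for the unset variable in the log

# Unquoted: the empty expansion disappears, so [ parses just '= false'
# and fails with a usage error (exit status 2), as in the log.
[ $flag = false ] 2>/dev/null
status_unquoted=$?

# Quoted: the empty string stays a real argument, so the comparison is
# well-formed and simply evaluates to false (exit status 1).
[ "$flag" = false ]
status_quoted=$?

echo "unquoted: $status_unquoted quoted: $status_quoted"
```

Because `[` returns a non-zero status either way here, the surrounding `if`/`&&` logic in the test script still falls through to the same branch, which is why the run continues past the error.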
bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000062f0 00:13:18.536 [2024-10-09 01:33:17.214435] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006490 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.536 "name": "raid_bdev1", 00:13:18.536 "uuid": "f3a792fb-c3ba-46ec-a938-82e279d9da14", 00:13:18.536 "strip_size_kb": 0, 00:13:18.536 "state": "online", 00:13:18.536 "raid_level": 
"raid1", 00:13:18.536 "superblock": true, 00:13:18.536 "num_base_bdevs": 4, 00:13:18.536 "num_base_bdevs_discovered": 3, 00:13:18.536 "num_base_bdevs_operational": 3, 00:13:18.536 "process": { 00:13:18.536 "type": "rebuild", 00:13:18.536 "target": "spare", 00:13:18.536 "progress": { 00:13:18.536 "blocks": 16384, 00:13:18.536 "percent": 25 00:13:18.536 } 00:13:18.536 }, 00:13:18.536 "base_bdevs_list": [ 00:13:18.536 { 00:13:18.536 "name": "spare", 00:13:18.536 "uuid": "8afac58c-8367-53db-b399-9fb90669a7c0", 00:13:18.536 "is_configured": true, 00:13:18.536 "data_offset": 2048, 00:13:18.536 "data_size": 63488 00:13:18.536 }, 00:13:18.536 { 00:13:18.536 "name": null, 00:13:18.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.536 "is_configured": false, 00:13:18.536 "data_offset": 0, 00:13:18.536 "data_size": 63488 00:13:18.536 }, 00:13:18.536 { 00:13:18.536 "name": "BaseBdev3", 00:13:18.536 "uuid": "fd25ec3e-7374-52d6-bcf2-37f8b635aa0c", 00:13:18.536 "is_configured": true, 00:13:18.536 "data_offset": 2048, 00:13:18.536 "data_size": 63488 00:13:18.536 }, 00:13:18.536 { 00:13:18.536 "name": "BaseBdev4", 00:13:18.536 "uuid": "e767daeb-15a5-59cc-8af8-035250dc627d", 00:13:18.536 "is_configured": true, 00:13:18.536 "data_offset": 2048, 00:13:18.536 "data_size": 63488 00:13:18.536 } 00:13:18.536 ] 00:13:18.536 }' 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=413 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.536 "name": "raid_bdev1", 00:13:18.536 "uuid": "f3a792fb-c3ba-46ec-a938-82e279d9da14", 00:13:18.536 "strip_size_kb": 0, 00:13:18.536 "state": "online", 00:13:18.536 "raid_level": "raid1", 00:13:18.536 "superblock": true, 00:13:18.536 "num_base_bdevs": 4, 00:13:18.536 "num_base_bdevs_discovered": 3, 00:13:18.536 "num_base_bdevs_operational": 3, 00:13:18.536 "process": { 00:13:18.536 "type": "rebuild", 00:13:18.536 "target": "spare", 00:13:18.536 "progress": { 00:13:18.536 "blocks": 18432, 00:13:18.536 "percent": 29 00:13:18.536 } 00:13:18.536 }, 00:13:18.536 "base_bdevs_list": [ 00:13:18.536 { 00:13:18.536 "name": "spare", 00:13:18.536 "uuid": "8afac58c-8367-53db-b399-9fb90669a7c0", 00:13:18.536 "is_configured": 
true, 00:13:18.536 "data_offset": 2048, 00:13:18.536 "data_size": 63488 00:13:18.536 }, 00:13:18.536 { 00:13:18.536 "name": null, 00:13:18.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.536 "is_configured": false, 00:13:18.536 "data_offset": 0, 00:13:18.536 "data_size": 63488 00:13:18.536 }, 00:13:18.536 { 00:13:18.536 "name": "BaseBdev3", 00:13:18.536 "uuid": "fd25ec3e-7374-52d6-bcf2-37f8b635aa0c", 00:13:18.536 "is_configured": true, 00:13:18.536 "data_offset": 2048, 00:13:18.536 "data_size": 63488 00:13:18.536 }, 00:13:18.536 { 00:13:18.536 "name": "BaseBdev4", 00:13:18.536 "uuid": "e767daeb-15a5-59cc-8af8-035250dc627d", 00:13:18.536 "is_configured": true, 00:13:18.536 "data_offset": 2048, 00:13:18.536 "data_size": 63488 00:13:18.536 } 00:13:18.536 ] 00:13:18.536 }' 00:13:18.536 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.795 122.25 IOPS, 366.75 MiB/s [2024-10-09T01:33:17.688Z] 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:18.796 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.796 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:18.796 01:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:18.796 [2024-10-09 01:33:17.542376] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:19.055 [2024-10-09 01:33:17.877322] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:19.314 [2024-10-09 01:33:18.122682] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:19.833 104.20 IOPS, 312.60 MiB/s [2024-10-09T01:33:18.726Z] 01:33:18 
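The rebuild `"percent"` values in the JSON dumps above track `progress.blocks` against the 63488-block `data_size`, truncated to an integer: 14336 → 22, 16384 → 25, 18432 → 29. A quick check of that relationship using the figures from the log (the truncation rule is inferred from matching those logged values, not from the SPDK source):

```shell
#!/usr/bin/env bash
# Recompute the rebuild percentages shown in the raid_bdev_info dumps.
data_size=63488   # per-base-bdev data_size in blocks, from the JSON above

for blocks in 14336 16384 18432 32768 53248; do
  echo "$blocks blocks -> $(( blocks * 100 / data_size ))%"
done
```

Shell integer division truncates, which reproduces the logged 22/25/29/51/83 sequence exactly.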
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:19.833 01:33:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.833 01:33:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.833 01:33:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.833 01:33:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.833 01:33:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.833 01:33:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.833 01:33:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.833 01:33:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.833 01:33:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.833 01:33:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.833 01:33:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.833 "name": "raid_bdev1", 00:13:19.833 "uuid": "f3a792fb-c3ba-46ec-a938-82e279d9da14", 00:13:19.833 "strip_size_kb": 0, 00:13:19.833 "state": "online", 00:13:19.833 "raid_level": "raid1", 00:13:19.833 "superblock": true, 00:13:19.833 "num_base_bdevs": 4, 00:13:19.833 "num_base_bdevs_discovered": 3, 00:13:19.833 "num_base_bdevs_operational": 3, 00:13:19.833 "process": { 00:13:19.833 "type": "rebuild", 00:13:19.833 "target": "spare", 00:13:19.833 "progress": { 00:13:19.833 "blocks": 32768, 00:13:19.833 "percent": 51 00:13:19.833 } 00:13:19.833 }, 00:13:19.833 "base_bdevs_list": [ 00:13:19.833 { 00:13:19.833 "name": "spare", 00:13:19.833 "uuid": 
"8afac58c-8367-53db-b399-9fb90669a7c0", 00:13:19.833 "is_configured": true, 00:13:19.833 "data_offset": 2048, 00:13:19.833 "data_size": 63488 00:13:19.833 }, 00:13:19.833 { 00:13:19.833 "name": null, 00:13:19.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.833 "is_configured": false, 00:13:19.833 "data_offset": 0, 00:13:19.833 "data_size": 63488 00:13:19.833 }, 00:13:19.833 { 00:13:19.833 "name": "BaseBdev3", 00:13:19.833 "uuid": "fd25ec3e-7374-52d6-bcf2-37f8b635aa0c", 00:13:19.833 "is_configured": true, 00:13:19.833 "data_offset": 2048, 00:13:19.833 "data_size": 63488 00:13:19.833 }, 00:13:19.833 { 00:13:19.833 "name": "BaseBdev4", 00:13:19.833 "uuid": "e767daeb-15a5-59cc-8af8-035250dc627d", 00:13:19.833 "is_configured": true, 00:13:19.833 "data_offset": 2048, 00:13:19.833 "data_size": 63488 00:13:19.833 } 00:13:19.833 ] 00:13:19.833 }' 00:13:19.833 01:33:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.833 [2024-10-09 01:33:18.595850] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:19.833 01:33:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.833 01:33:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.833 01:33:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.833 01:33:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:20.402 [2024-10-09 01:33:19.258632] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:20.921 93.00 IOPS, 279.00 MiB/s [2024-10-09T01:33:19.814Z] [2024-10-09 01:33:19.610563] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:20.921 01:33:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:20.921 01:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.921 01:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.921 01:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.921 01:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.921 01:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.921 01:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.921 01:33:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.921 01:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.921 01:33:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.921 01:33:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.921 01:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.921 "name": "raid_bdev1", 00:13:20.921 "uuid": "f3a792fb-c3ba-46ec-a938-82e279d9da14", 00:13:20.921 "strip_size_kb": 0, 00:13:20.921 "state": "online", 00:13:20.921 "raid_level": "raid1", 00:13:20.921 "superblock": true, 00:13:20.921 "num_base_bdevs": 4, 00:13:20.921 "num_base_bdevs_discovered": 3, 00:13:20.921 "num_base_bdevs_operational": 3, 00:13:20.921 "process": { 00:13:20.921 "type": "rebuild", 00:13:20.921 "target": "spare", 00:13:20.921 "progress": { 00:13:20.921 "blocks": 53248, 00:13:20.921 "percent": 83 00:13:20.921 } 00:13:20.921 }, 00:13:20.921 "base_bdevs_list": [ 00:13:20.921 { 00:13:20.921 "name": "spare", 00:13:20.921 "uuid": 
"8afac58c-8367-53db-b399-9fb90669a7c0", 00:13:20.921 "is_configured": true, 00:13:20.921 "data_offset": 2048, 00:13:20.921 "data_size": 63488 00:13:20.921 }, 00:13:20.921 { 00:13:20.921 "name": null, 00:13:20.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.921 "is_configured": false, 00:13:20.921 "data_offset": 0, 00:13:20.921 "data_size": 63488 00:13:20.921 }, 00:13:20.921 { 00:13:20.921 "name": "BaseBdev3", 00:13:20.921 "uuid": "fd25ec3e-7374-52d6-bcf2-37f8b635aa0c", 00:13:20.921 "is_configured": true, 00:13:20.921 "data_offset": 2048, 00:13:20.921 "data_size": 63488 00:13:20.921 }, 00:13:20.921 { 00:13:20.921 "name": "BaseBdev4", 00:13:20.921 "uuid": "e767daeb-15a5-59cc-8af8-035250dc627d", 00:13:20.921 "is_configured": true, 00:13:20.921 "data_offset": 2048, 00:13:20.921 "data_size": 63488 00:13:20.921 } 00:13:20.921 ] 00:13:20.921 }' 00:13:20.921 01:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.921 01:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.921 01:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.181 01:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.181 01:33:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:21.181 [2024-10-09 01:33:20.057750] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:21.181 [2024-10-09 01:33:20.057989] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:21.750 [2024-10-09 01:33:20.392760] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:21.750 84.57 IOPS, 253.71 MiB/s [2024-10-09T01:33:20.643Z] [2024-10-09 01:33:20.492761] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:21.750 [2024-10-09 01:33:20.502869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.009 01:33:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:22.009 01:33:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.009 01:33:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.009 01:33:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.009 01:33:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.009 01:33:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.009 01:33:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.009 01:33:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.009 01:33:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.009 01:33:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.009 01:33:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.009 01:33:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.009 "name": "raid_bdev1", 00:13:22.009 "uuid": "f3a792fb-c3ba-46ec-a938-82e279d9da14", 00:13:22.009 "strip_size_kb": 0, 00:13:22.009 "state": "online", 00:13:22.009 "raid_level": "raid1", 00:13:22.009 "superblock": true, 00:13:22.009 "num_base_bdevs": 4, 00:13:22.009 "num_base_bdevs_discovered": 3, 00:13:22.009 "num_base_bdevs_operational": 3, 00:13:22.009 "base_bdevs_list": [ 00:13:22.009 { 00:13:22.009 "name": 
"spare", 00:13:22.009 "uuid": "8afac58c-8367-53db-b399-9fb90669a7c0", 00:13:22.009 "is_configured": true, 00:13:22.009 "data_offset": 2048, 00:13:22.009 "data_size": 63488 00:13:22.009 }, 00:13:22.009 { 00:13:22.009 "name": null, 00:13:22.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.009 "is_configured": false, 00:13:22.009 "data_offset": 0, 00:13:22.009 "data_size": 63488 00:13:22.009 }, 00:13:22.009 { 00:13:22.009 "name": "BaseBdev3", 00:13:22.009 "uuid": "fd25ec3e-7374-52d6-bcf2-37f8b635aa0c", 00:13:22.009 "is_configured": true, 00:13:22.009 "data_offset": 2048, 00:13:22.009 "data_size": 63488 00:13:22.009 }, 00:13:22.009 { 00:13:22.009 "name": "BaseBdev4", 00:13:22.009 "uuid": "e767daeb-15a5-59cc-8af8-035250dc627d", 00:13:22.009 "is_configured": true, 00:13:22.009 "data_offset": 2048, 00:13:22.010 "data_size": 63488 00:13:22.010 } 00:13:22.010 ] 00:13:22.010 }' 00:13:22.010 01:33:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.269 01:33:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:22.269 01:33:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.269 01:33:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:22.269 01:33:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:22.269 01:33:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:22.269 01:33:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.269 01:33:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:22.269 01:33:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:22.269 01:33:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:13:22.269 01:33:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.269 01:33:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.269 01:33:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.269 01:33:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.269 01:33:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.269 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.269 "name": "raid_bdev1", 00:13:22.269 "uuid": "f3a792fb-c3ba-46ec-a938-82e279d9da14", 00:13:22.269 "strip_size_kb": 0, 00:13:22.269 "state": "online", 00:13:22.269 "raid_level": "raid1", 00:13:22.269 "superblock": true, 00:13:22.269 "num_base_bdevs": 4, 00:13:22.269 "num_base_bdevs_discovered": 3, 00:13:22.270 "num_base_bdevs_operational": 3, 00:13:22.270 "base_bdevs_list": [ 00:13:22.270 { 00:13:22.270 "name": "spare", 00:13:22.270 "uuid": "8afac58c-8367-53db-b399-9fb90669a7c0", 00:13:22.270 "is_configured": true, 00:13:22.270 "data_offset": 2048, 00:13:22.270 "data_size": 63488 00:13:22.270 }, 00:13:22.270 { 00:13:22.270 "name": null, 00:13:22.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.270 "is_configured": false, 00:13:22.270 "data_offset": 0, 00:13:22.270 "data_size": 63488 00:13:22.270 }, 00:13:22.270 { 00:13:22.270 "name": "BaseBdev3", 00:13:22.270 "uuid": "fd25ec3e-7374-52d6-bcf2-37f8b635aa0c", 00:13:22.270 "is_configured": true, 00:13:22.270 "data_offset": 2048, 00:13:22.270 "data_size": 63488 00:13:22.270 }, 00:13:22.270 { 00:13:22.270 "name": "BaseBdev4", 00:13:22.270 "uuid": "e767daeb-15a5-59cc-8af8-035250dc627d", 00:13:22.270 "is_configured": true, 00:13:22.270 "data_offset": 2048, 00:13:22.270 "data_size": 63488 00:13:22.270 } 00:13:22.270 ] 
00:13:22.270 }' 00:13:22.270 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.270 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:22.270 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.270 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:22.270 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:22.270 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.270 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.270 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.270 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.270 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.270 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.270 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.270 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.270 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.270 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.270 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.270 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.270 01:33:21 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.270 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.270 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.270 "name": "raid_bdev1", 00:13:22.270 "uuid": "f3a792fb-c3ba-46ec-a938-82e279d9da14", 00:13:22.270 "strip_size_kb": 0, 00:13:22.270 "state": "online", 00:13:22.270 "raid_level": "raid1", 00:13:22.270 "superblock": true, 00:13:22.270 "num_base_bdevs": 4, 00:13:22.270 "num_base_bdevs_discovered": 3, 00:13:22.270 "num_base_bdevs_operational": 3, 00:13:22.270 "base_bdevs_list": [ 00:13:22.270 { 00:13:22.270 "name": "spare", 00:13:22.270 "uuid": "8afac58c-8367-53db-b399-9fb90669a7c0", 00:13:22.270 "is_configured": true, 00:13:22.270 "data_offset": 2048, 00:13:22.270 "data_size": 63488 00:13:22.270 }, 00:13:22.270 { 00:13:22.270 "name": null, 00:13:22.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.270 "is_configured": false, 00:13:22.270 "data_offset": 0, 00:13:22.270 "data_size": 63488 00:13:22.270 }, 00:13:22.270 { 00:13:22.270 "name": "BaseBdev3", 00:13:22.270 "uuid": "fd25ec3e-7374-52d6-bcf2-37f8b635aa0c", 00:13:22.270 "is_configured": true, 00:13:22.270 "data_offset": 2048, 00:13:22.270 "data_size": 63488 00:13:22.270 }, 00:13:22.270 { 00:13:22.270 "name": "BaseBdev4", 00:13:22.270 "uuid": "e767daeb-15a5-59cc-8af8-035250dc627d", 00:13:22.270 "is_configured": true, 00:13:22.270 "data_offset": 2048, 00:13:22.270 "data_size": 63488 00:13:22.270 } 00:13:22.270 ] 00:13:22.270 }' 00:13:22.270 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.270 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.839 77.50 IOPS, 232.50 MiB/s [2024-10-09T01:33:21.732Z] 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:22.839 
01:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.839 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.839 [2024-10-09 01:33:21.507619] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:22.839 [2024-10-09 01:33:21.507652] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:22.839 00:13:22.839 Latency(us) 00:13:22.839 [2024-10-09T01:33:21.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:22.839 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:22.839 raid_bdev1 : 8.16 76.44 229.32 0.00 0.00 18462.16 287.39 119727.58 00:13:22.839 [2024-10-09T01:33:21.732Z] =================================================================================================================== 00:13:22.839 [2024-10-09T01:33:21.732Z] Total : 76.44 229.32 0.00 0.00 18462.16 287.39 119727.58 00:13:22.839 { 00:13:22.839 "results": [ 00:13:22.839 { 00:13:22.839 "job": "raid_bdev1", 00:13:22.839 "core_mask": "0x1", 00:13:22.839 "workload": "randrw", 00:13:22.839 "percentage": 50, 00:13:22.839 "status": "finished", 00:13:22.839 "queue_depth": 2, 00:13:22.839 "io_size": 3145728, 00:13:22.839 "runtime": 8.16343, 00:13:22.839 "iops": 76.4384578541128, 00:13:22.839 "mibps": 229.31537356233838, 00:13:22.839 "io_failed": 0, 00:13:22.839 "io_timeout": 0, 00:13:22.839 "avg_latency_us": 18462.164949144815, 00:13:22.839 "min_latency_us": 287.3947528981086, 00:13:22.839 "max_latency_us": 119727.58302100583 00:13:22.839 } 00:13:22.839 ], 00:13:22.839 "core_count": 1 00:13:22.839 } 00:13:22.839 [2024-10-09 01:33:21.594919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.839 [2024-10-09 01:33:21.594962] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:22.839 [2024-10-09 01:33:21.595067] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:22.840 [2024-10-09 01:33:21.595077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:22.840 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.840 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.840 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.840 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:22.840 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.840 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.840 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:22.840 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:22.840 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:22.840 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:22.840 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:22.840 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:22.840 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:22.840 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:22.840 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:22.840 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:22.840 01:33:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:22.840 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:22.840 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:23.101 /dev/nbd0 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.101 1+0 records in 00:13:23.101 1+0 records out 00:13:23.101 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00056558 s, 7.2 MB/s 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@886 -- # size=4096 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:23.101 01:33:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:23.370 /dev/nbd1 00:13:23.370 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:23.370 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:23.370 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:23.370 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:23.370 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:23.370 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:23.370 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:23.370 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:23.370 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:23.370 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:23.370 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.370 1+0 records in 00:13:23.370 1+0 records out 00:13:23.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532323 s, 7.7 MB/s 00:13:23.370 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.370 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # 
size=4096 00:13:23.370 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.370 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:23.370 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:23.370 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:23.370 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:23.370 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:23.370 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:23.370 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:23.370 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:23.370 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:23.370 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:23.370 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:23.370 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:23.649 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:23.649 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:23.649 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:23.649 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:23.649 01:33:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:23.649 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:23.649 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:23.649 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:23.649 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:23.649 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:23.649 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:23.649 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:23.649 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:23.649 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:23.649 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:23.649 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:23.649 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:23.649 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:23.649 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:23.649 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:23.909 /dev/nbd1 00:13:23.909 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:23.909 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd1 00:13:23.909 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:23.909 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:23.909 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:23.909 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:23.909 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:23.909 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:23.909 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:23.909 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:23.909 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.909 1+0 records in 00:13:23.909 1+0 records out 00:13:23.909 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589223 s, 7.0 MB/s 00:13:23.909 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.909 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:23.909 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.909 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:23.909 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:23.909 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:23.909 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:13:23.909 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:23.909 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:23.909 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:23.909 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:23.909 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:23.909 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:23.909 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:23.909 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:24.169 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:24.169 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:24.169 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:24.169 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.169 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.169 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:24.169 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:24.169 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.169 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:24.169 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:24.169 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:24.169 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:24.169 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:24.169 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.169 01:33:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:24.428 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:24.428 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:24.428 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:24.428 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.428 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.428 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:24.428 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:24.428 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.428 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:24.428 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:24.428 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.428 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.428 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.428 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:24.428 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.428 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.428 [2024-10-09 01:33:23.224064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:24.428 [2024-10-09 01:33:23.224170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.428 [2024-10-09 01:33:23.224218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:24.428 [2024-10-09 01:33:23.224246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.428 [2024-10-09 01:33:23.226786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.428 [2024-10-09 01:33:23.226858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:24.428 [2024-10-09 01:33:23.226974] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:24.428 [2024-10-09 01:33:23.227034] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:24.428 [2024-10-09 01:33:23.227179] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:24.428 [2024-10-09 01:33:23.227348] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:24.428 spare 00:13:24.428 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.428 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:24.428 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.428 01:33:23 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.688 [2024-10-09 01:33:23.327467] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:24.688 [2024-10-09 01:33:23.327542] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:24.688 [2024-10-09 01:33:23.327858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:13:24.688 [2024-10-09 01:33:23.328058] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:24.688 [2024-10-09 01:33:23.328105] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:24.688 [2024-10-09 01:33:23.328287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.688 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.688 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:24.688 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.688 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.688 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.688 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.688 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.688 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.688 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.688 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:24.688 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.688 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.688 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.688 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.688 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.688 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.688 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.688 "name": "raid_bdev1", 00:13:24.688 "uuid": "f3a792fb-c3ba-46ec-a938-82e279d9da14", 00:13:24.688 "strip_size_kb": 0, 00:13:24.688 "state": "online", 00:13:24.688 "raid_level": "raid1", 00:13:24.688 "superblock": true, 00:13:24.688 "num_base_bdevs": 4, 00:13:24.688 "num_base_bdevs_discovered": 3, 00:13:24.688 "num_base_bdevs_operational": 3, 00:13:24.688 "base_bdevs_list": [ 00:13:24.688 { 00:13:24.688 "name": "spare", 00:13:24.688 "uuid": "8afac58c-8367-53db-b399-9fb90669a7c0", 00:13:24.688 "is_configured": true, 00:13:24.688 "data_offset": 2048, 00:13:24.688 "data_size": 63488 00:13:24.688 }, 00:13:24.688 { 00:13:24.688 "name": null, 00:13:24.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.688 "is_configured": false, 00:13:24.688 "data_offset": 2048, 00:13:24.688 "data_size": 63488 00:13:24.688 }, 00:13:24.688 { 00:13:24.688 "name": "BaseBdev3", 00:13:24.688 "uuid": "fd25ec3e-7374-52d6-bcf2-37f8b635aa0c", 00:13:24.688 "is_configured": true, 00:13:24.688 "data_offset": 2048, 00:13:24.688 "data_size": 63488 00:13:24.688 }, 00:13:24.688 { 00:13:24.688 "name": "BaseBdev4", 00:13:24.688 "uuid": "e767daeb-15a5-59cc-8af8-035250dc627d", 00:13:24.688 "is_configured": true, 00:13:24.688 
"data_offset": 2048, 00:13:24.688 "data_size": 63488 00:13:24.688 } 00:13:24.688 ] 00:13:24.688 }' 00:13:24.688 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.688 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.947 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:24.947 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.947 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:24.947 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:24.947 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.947 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.947 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.947 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.947 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.947 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.947 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.947 "name": "raid_bdev1", 00:13:24.947 "uuid": "f3a792fb-c3ba-46ec-a938-82e279d9da14", 00:13:24.947 "strip_size_kb": 0, 00:13:24.947 "state": "online", 00:13:24.947 "raid_level": "raid1", 00:13:24.947 "superblock": true, 00:13:24.947 "num_base_bdevs": 4, 00:13:24.947 "num_base_bdevs_discovered": 3, 00:13:24.947 "num_base_bdevs_operational": 3, 00:13:24.947 "base_bdevs_list": [ 00:13:24.947 { 00:13:24.947 "name": "spare", 00:13:24.947 "uuid": 
"8afac58c-8367-53db-b399-9fb90669a7c0", 00:13:24.947 "is_configured": true, 00:13:24.947 "data_offset": 2048, 00:13:24.947 "data_size": 63488 00:13:24.947 }, 00:13:24.947 { 00:13:24.947 "name": null, 00:13:24.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.947 "is_configured": false, 00:13:24.947 "data_offset": 2048, 00:13:24.947 "data_size": 63488 00:13:24.947 }, 00:13:24.947 { 00:13:24.947 "name": "BaseBdev3", 00:13:24.947 "uuid": "fd25ec3e-7374-52d6-bcf2-37f8b635aa0c", 00:13:24.947 "is_configured": true, 00:13:24.947 "data_offset": 2048, 00:13:24.947 "data_size": 63488 00:13:24.947 }, 00:13:24.947 { 00:13:24.947 "name": "BaseBdev4", 00:13:24.947 "uuid": "e767daeb-15a5-59cc-8af8-035250dc627d", 00:13:24.947 "is_configured": true, 00:13:24.947 "data_offset": 2048, 00:13:24.947 "data_size": 63488 00:13:24.947 } 00:13:24.947 ] 00:13:24.947 }' 00:13:24.948 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 
00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.207 [2024-10-09 01:33:23.956559] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.207 01:33:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.207 01:33:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.207 "name": "raid_bdev1", 00:13:25.207 "uuid": "f3a792fb-c3ba-46ec-a938-82e279d9da14", 00:13:25.207 "strip_size_kb": 0, 00:13:25.207 "state": "online", 00:13:25.207 "raid_level": "raid1", 00:13:25.207 "superblock": true, 00:13:25.207 "num_base_bdevs": 4, 00:13:25.207 "num_base_bdevs_discovered": 2, 00:13:25.207 "num_base_bdevs_operational": 2, 00:13:25.207 "base_bdevs_list": [ 00:13:25.207 { 00:13:25.207 "name": null, 00:13:25.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.207 "is_configured": false, 00:13:25.207 "data_offset": 0, 00:13:25.207 "data_size": 63488 00:13:25.207 }, 00:13:25.207 { 00:13:25.207 "name": null, 00:13:25.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.207 "is_configured": false, 00:13:25.207 "data_offset": 2048, 00:13:25.207 "data_size": 63488 00:13:25.207 }, 00:13:25.207 { 00:13:25.207 "name": "BaseBdev3", 00:13:25.207 "uuid": "fd25ec3e-7374-52d6-bcf2-37f8b635aa0c", 00:13:25.207 "is_configured": true, 00:13:25.207 "data_offset": 2048, 00:13:25.207 "data_size": 63488 00:13:25.207 }, 00:13:25.207 { 00:13:25.207 "name": "BaseBdev4", 00:13:25.207 "uuid": "e767daeb-15a5-59cc-8af8-035250dc627d", 00:13:25.207 "is_configured": true, 00:13:25.207 "data_offset": 2048, 00:13:25.207 "data_size": 63488 00:13:25.207 } 00:13:25.207 ] 00:13:25.207 }' 00:13:25.207 01:33:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.207 01:33:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.775 01:33:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev 
raid_bdev1 spare 00:13:25.775 01:33:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.775 01:33:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.775 [2024-10-09 01:33:24.416724] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:25.775 [2024-10-09 01:33:24.416961] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:25.775 [2024-10-09 01:33:24.417022] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:25.775 [2024-10-09 01:33:24.417141] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:25.776 [2024-10-09 01:33:24.423466] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:13:25.776 01:33:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.776 01:33:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:25.776 [2024-10-09 01:33:24.425649] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:26.713 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.713 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.713 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.713 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.713 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.713 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.713 01:33:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.713 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.713 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.713 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.713 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.713 "name": "raid_bdev1", 00:13:26.713 "uuid": "f3a792fb-c3ba-46ec-a938-82e279d9da14", 00:13:26.713 "strip_size_kb": 0, 00:13:26.713 "state": "online", 00:13:26.713 "raid_level": "raid1", 00:13:26.713 "superblock": true, 00:13:26.713 "num_base_bdevs": 4, 00:13:26.713 "num_base_bdevs_discovered": 3, 00:13:26.713 "num_base_bdevs_operational": 3, 00:13:26.713 "process": { 00:13:26.713 "type": "rebuild", 00:13:26.713 "target": "spare", 00:13:26.713 "progress": { 00:13:26.713 "blocks": 20480, 00:13:26.713 "percent": 32 00:13:26.713 } 00:13:26.713 }, 00:13:26.713 "base_bdevs_list": [ 00:13:26.713 { 00:13:26.713 "name": "spare", 00:13:26.713 "uuid": "8afac58c-8367-53db-b399-9fb90669a7c0", 00:13:26.713 "is_configured": true, 00:13:26.713 "data_offset": 2048, 00:13:26.713 "data_size": 63488 00:13:26.713 }, 00:13:26.713 { 00:13:26.713 "name": null, 00:13:26.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.713 "is_configured": false, 00:13:26.713 "data_offset": 2048, 00:13:26.713 "data_size": 63488 00:13:26.713 }, 00:13:26.713 { 00:13:26.713 "name": "BaseBdev3", 00:13:26.713 "uuid": "fd25ec3e-7374-52d6-bcf2-37f8b635aa0c", 00:13:26.713 "is_configured": true, 00:13:26.713 "data_offset": 2048, 00:13:26.713 "data_size": 63488 00:13:26.713 }, 00:13:26.713 { 00:13:26.713 "name": "BaseBdev4", 00:13:26.713 "uuid": "e767daeb-15a5-59cc-8af8-035250dc627d", 00:13:26.713 "is_configured": true, 00:13:26.713 "data_offset": 2048, 00:13:26.713 
"data_size": 63488 00:13:26.713 } 00:13:26.713 ] 00:13:26.713 }' 00:13:26.713 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.713 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.713 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.713 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.713 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:26.713 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.713 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.713 [2024-10-09 01:33:25.563537] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:26.973 [2024-10-09 01:33:25.635460] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:26.973 [2024-10-09 01:33:25.635518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.973 [2024-10-09 01:33:25.635549] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:26.973 [2024-10-09 01:33:25.635556] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:26.973 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.973 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:26.973 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.973 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.973 01:33:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.973 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.973 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:26.973 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.973 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.973 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.973 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.973 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.973 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.973 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.973 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.973 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.973 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.973 "name": "raid_bdev1", 00:13:26.973 "uuid": "f3a792fb-c3ba-46ec-a938-82e279d9da14", 00:13:26.973 "strip_size_kb": 0, 00:13:26.973 "state": "online", 00:13:26.973 "raid_level": "raid1", 00:13:26.973 "superblock": true, 00:13:26.973 "num_base_bdevs": 4, 00:13:26.973 "num_base_bdevs_discovered": 2, 00:13:26.973 "num_base_bdevs_operational": 2, 00:13:26.973 "base_bdevs_list": [ 00:13:26.973 { 00:13:26.973 "name": null, 00:13:26.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.973 "is_configured": false, 00:13:26.973 "data_offset": 0, 00:13:26.973 "data_size": 
63488 00:13:26.973 }, 00:13:26.973 { 00:13:26.973 "name": null, 00:13:26.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.973 "is_configured": false, 00:13:26.973 "data_offset": 2048, 00:13:26.973 "data_size": 63488 00:13:26.973 }, 00:13:26.973 { 00:13:26.973 "name": "BaseBdev3", 00:13:26.973 "uuid": "fd25ec3e-7374-52d6-bcf2-37f8b635aa0c", 00:13:26.973 "is_configured": true, 00:13:26.973 "data_offset": 2048, 00:13:26.973 "data_size": 63488 00:13:26.973 }, 00:13:26.973 { 00:13:26.973 "name": "BaseBdev4", 00:13:26.973 "uuid": "e767daeb-15a5-59cc-8af8-035250dc627d", 00:13:26.973 "is_configured": true, 00:13:26.973 "data_offset": 2048, 00:13:26.973 "data_size": 63488 00:13:26.973 } 00:13:26.973 ] 00:13:26.973 }' 00:13:26.973 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.973 01:33:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.539 01:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:27.539 01:33:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.539 01:33:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.539 [2024-10-09 01:33:26.133766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:27.539 [2024-10-09 01:33:26.133827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.539 [2024-10-09 01:33:26.133859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:27.539 [2024-10-09 01:33:26.133869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.539 [2024-10-09 01:33:26.134358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.539 [2024-10-09 01:33:26.134374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:13:27.539 [2024-10-09 01:33:26.134467] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:27.539 [2024-10-09 01:33:26.134478] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:27.539 [2024-10-09 01:33:26.134491] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:27.539 [2024-10-09 01:33:26.134511] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:27.539 [2024-10-09 01:33:26.140517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000373d0 00:13:27.539 spare 00:13:27.539 01:33:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.539 01:33:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:27.539 [2024-10-09 01:33:26.142701] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:28.477 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.477 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.477 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:28.477 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.477 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.477 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.477 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.477 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:28.477 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.477 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.478 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.478 "name": "raid_bdev1", 00:13:28.478 "uuid": "f3a792fb-c3ba-46ec-a938-82e279d9da14", 00:13:28.478 "strip_size_kb": 0, 00:13:28.478 "state": "online", 00:13:28.478 "raid_level": "raid1", 00:13:28.478 "superblock": true, 00:13:28.478 "num_base_bdevs": 4, 00:13:28.478 "num_base_bdevs_discovered": 3, 00:13:28.478 "num_base_bdevs_operational": 3, 00:13:28.478 "process": { 00:13:28.478 "type": "rebuild", 00:13:28.478 "target": "spare", 00:13:28.478 "progress": { 00:13:28.478 "blocks": 20480, 00:13:28.478 "percent": 32 00:13:28.478 } 00:13:28.478 }, 00:13:28.478 "base_bdevs_list": [ 00:13:28.478 { 00:13:28.478 "name": "spare", 00:13:28.478 "uuid": "8afac58c-8367-53db-b399-9fb90669a7c0", 00:13:28.478 "is_configured": true, 00:13:28.478 "data_offset": 2048, 00:13:28.478 "data_size": 63488 00:13:28.478 }, 00:13:28.478 { 00:13:28.478 "name": null, 00:13:28.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.478 "is_configured": false, 00:13:28.478 "data_offset": 2048, 00:13:28.478 "data_size": 63488 00:13:28.478 }, 00:13:28.478 { 00:13:28.478 "name": "BaseBdev3", 00:13:28.478 "uuid": "fd25ec3e-7374-52d6-bcf2-37f8b635aa0c", 00:13:28.478 "is_configured": true, 00:13:28.478 "data_offset": 2048, 00:13:28.478 "data_size": 63488 00:13:28.478 }, 00:13:28.478 { 00:13:28.478 "name": "BaseBdev4", 00:13:28.478 "uuid": "e767daeb-15a5-59cc-8af8-035250dc627d", 00:13:28.478 "is_configured": true, 00:13:28.478 "data_offset": 2048, 00:13:28.478 "data_size": 63488 00:13:28.478 } 00:13:28.478 ] 00:13:28.478 }' 00:13:28.478 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.478 01:33:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:28.478 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.478 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:28.478 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:28.478 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.478 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.478 [2024-10-09 01:33:27.280713] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:28.478 [2024-10-09 01:33:27.352567] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:28.478 [2024-10-09 01:33:27.352680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.478 [2024-10-09 01:33:27.352699] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:28.478 [2024-10-09 01:33:27.352710] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:28.478 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.478 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:28.478 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.478 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.478 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.478 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.478 01:33:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:28.478 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.478 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.478 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.478 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.737 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.737 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.737 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.737 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.737 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.737 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.737 "name": "raid_bdev1", 00:13:28.737 "uuid": "f3a792fb-c3ba-46ec-a938-82e279d9da14", 00:13:28.737 "strip_size_kb": 0, 00:13:28.737 "state": "online", 00:13:28.737 "raid_level": "raid1", 00:13:28.737 "superblock": true, 00:13:28.737 "num_base_bdevs": 4, 00:13:28.737 "num_base_bdevs_discovered": 2, 00:13:28.737 "num_base_bdevs_operational": 2, 00:13:28.737 "base_bdevs_list": [ 00:13:28.737 { 00:13:28.737 "name": null, 00:13:28.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.737 "is_configured": false, 00:13:28.737 "data_offset": 0, 00:13:28.737 "data_size": 63488 00:13:28.737 }, 00:13:28.737 { 00:13:28.737 "name": null, 00:13:28.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.737 "is_configured": false, 00:13:28.737 "data_offset": 2048, 00:13:28.737 
"data_size": 63488 00:13:28.737 }, 00:13:28.737 { 00:13:28.737 "name": "BaseBdev3", 00:13:28.737 "uuid": "fd25ec3e-7374-52d6-bcf2-37f8b635aa0c", 00:13:28.737 "is_configured": true, 00:13:28.737 "data_offset": 2048, 00:13:28.737 "data_size": 63488 00:13:28.737 }, 00:13:28.737 { 00:13:28.737 "name": "BaseBdev4", 00:13:28.737 "uuid": "e767daeb-15a5-59cc-8af8-035250dc627d", 00:13:28.737 "is_configured": true, 00:13:28.737 "data_offset": 2048, 00:13:28.737 "data_size": 63488 00:13:28.737 } 00:13:28.737 ] 00:13:28.737 }' 00:13:28.737 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.737 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.997 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:28.997 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.997 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:28.997 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:28.997 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.997 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.997 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.997 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.997 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.997 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.997 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.997 "name": "raid_bdev1", 
00:13:28.997 "uuid": "f3a792fb-c3ba-46ec-a938-82e279d9da14", 00:13:28.997 "strip_size_kb": 0, 00:13:28.997 "state": "online", 00:13:28.997 "raid_level": "raid1", 00:13:28.997 "superblock": true, 00:13:28.997 "num_base_bdevs": 4, 00:13:28.997 "num_base_bdevs_discovered": 2, 00:13:28.997 "num_base_bdevs_operational": 2, 00:13:28.997 "base_bdevs_list": [ 00:13:28.997 { 00:13:28.997 "name": null, 00:13:28.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.997 "is_configured": false, 00:13:28.997 "data_offset": 0, 00:13:28.997 "data_size": 63488 00:13:28.997 }, 00:13:28.997 { 00:13:28.997 "name": null, 00:13:28.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.997 "is_configured": false, 00:13:28.997 "data_offset": 2048, 00:13:28.997 "data_size": 63488 00:13:28.997 }, 00:13:28.997 { 00:13:28.997 "name": "BaseBdev3", 00:13:28.997 "uuid": "fd25ec3e-7374-52d6-bcf2-37f8b635aa0c", 00:13:28.997 "is_configured": true, 00:13:28.997 "data_offset": 2048, 00:13:28.997 "data_size": 63488 00:13:28.997 }, 00:13:28.997 { 00:13:28.997 "name": "BaseBdev4", 00:13:28.997 "uuid": "e767daeb-15a5-59cc-8af8-035250dc627d", 00:13:28.997 "is_configured": true, 00:13:28.997 "data_offset": 2048, 00:13:28.997 "data_size": 63488 00:13:28.997 } 00:13:28.997 ] 00:13:28.997 }' 00:13:28.997 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.257 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:29.257 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.257 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:29.257 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:29.257 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.257 01:33:27 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.257 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.257 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:29.257 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.257 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.257 [2024-10-09 01:33:27.955363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:29.257 [2024-10-09 01:33:27.955422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.257 [2024-10-09 01:33:27.955444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:13:29.257 [2024-10-09 01:33:27.955455] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.257 [2024-10-09 01:33:27.955896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.257 [2024-10-09 01:33:27.955918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:29.257 [2024-10-09 01:33:27.955992] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:29.257 [2024-10-09 01:33:27.956008] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:29.257 [2024-10-09 01:33:27.956016] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:29.257 [2024-10-09 01:33:27.956029] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:29.257 BaseBdev1 00:13:29.257 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:13:29.257 01:33:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:30.195 01:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:30.195 01:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.195 01:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.195 01:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.195 01:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.195 01:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:30.195 01:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.195 01:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.195 01:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.195 01:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.195 01:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.195 01:33:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.195 01:33:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.195 01:33:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.195 01:33:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.195 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.195 "name": "raid_bdev1", 00:13:30.195 "uuid": 
"f3a792fb-c3ba-46ec-a938-82e279d9da14", 00:13:30.195 "strip_size_kb": 0, 00:13:30.195 "state": "online", 00:13:30.195 "raid_level": "raid1", 00:13:30.195 "superblock": true, 00:13:30.195 "num_base_bdevs": 4, 00:13:30.195 "num_base_bdevs_discovered": 2, 00:13:30.195 "num_base_bdevs_operational": 2, 00:13:30.195 "base_bdevs_list": [ 00:13:30.195 { 00:13:30.195 "name": null, 00:13:30.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.195 "is_configured": false, 00:13:30.195 "data_offset": 0, 00:13:30.195 "data_size": 63488 00:13:30.195 }, 00:13:30.195 { 00:13:30.195 "name": null, 00:13:30.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.195 "is_configured": false, 00:13:30.195 "data_offset": 2048, 00:13:30.195 "data_size": 63488 00:13:30.195 }, 00:13:30.195 { 00:13:30.195 "name": "BaseBdev3", 00:13:30.195 "uuid": "fd25ec3e-7374-52d6-bcf2-37f8b635aa0c", 00:13:30.195 "is_configured": true, 00:13:30.195 "data_offset": 2048, 00:13:30.195 "data_size": 63488 00:13:30.195 }, 00:13:30.195 { 00:13:30.195 "name": "BaseBdev4", 00:13:30.195 "uuid": "e767daeb-15a5-59cc-8af8-035250dc627d", 00:13:30.195 "is_configured": true, 00:13:30.195 "data_offset": 2048, 00:13:30.195 "data_size": 63488 00:13:30.195 } 00:13:30.195 ] 00:13:30.195 }' 00:13:30.195 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.195 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.764 "name": "raid_bdev1", 00:13:30.764 "uuid": "f3a792fb-c3ba-46ec-a938-82e279d9da14", 00:13:30.764 "strip_size_kb": 0, 00:13:30.764 "state": "online", 00:13:30.764 "raid_level": "raid1", 00:13:30.764 "superblock": true, 00:13:30.764 "num_base_bdevs": 4, 00:13:30.764 "num_base_bdevs_discovered": 2, 00:13:30.764 "num_base_bdevs_operational": 2, 00:13:30.764 "base_bdevs_list": [ 00:13:30.764 { 00:13:30.764 "name": null, 00:13:30.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.764 "is_configured": false, 00:13:30.764 "data_offset": 0, 00:13:30.764 "data_size": 63488 00:13:30.764 }, 00:13:30.764 { 00:13:30.764 "name": null, 00:13:30.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.764 "is_configured": false, 00:13:30.764 "data_offset": 2048, 00:13:30.764 "data_size": 63488 00:13:30.764 }, 00:13:30.764 { 00:13:30.764 "name": "BaseBdev3", 00:13:30.764 "uuid": "fd25ec3e-7374-52d6-bcf2-37f8b635aa0c", 00:13:30.764 "is_configured": true, 00:13:30.764 "data_offset": 2048, 00:13:30.764 "data_size": 63488 00:13:30.764 }, 00:13:30.764 { 00:13:30.764 "name": "BaseBdev4", 00:13:30.764 "uuid": "e767daeb-15a5-59cc-8af8-035250dc627d", 00:13:30.764 "is_configured": true, 00:13:30.764 "data_offset": 2048, 00:13:30.764 "data_size": 63488 00:13:30.764 
} 00:13:30.764 ] 00:13:30.764 }' 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.764 [2024-10-09 01:33:29.571953] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:30.764 [2024-10-09 01:33:29.572164] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than 
existing raid bdev raid_bdev1 (6) 00:13:30.764 [2024-10-09 01:33:29.572197] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:30.764 request: 00:13:30.764 { 00:13:30.764 "base_bdev": "BaseBdev1", 00:13:30.764 "raid_bdev": "raid_bdev1", 00:13:30.764 "method": "bdev_raid_add_base_bdev", 00:13:30.764 "req_id": 1 00:13:30.764 } 00:13:30.764 Got JSON-RPC error response 00:13:30.764 response: 00:13:30.764 { 00:13:30.764 "code": -22, 00:13:30.764 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:30.764 } 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:30.764 01:33:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:31.700 01:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:31.700 01:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.700 01:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.700 01:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.700 01:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.700 01:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:31.700 01:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:13:31.700 01:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.700 01:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.700 01:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.959 01:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.959 01:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.959 01:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.959 01:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.959 01:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.959 01:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.959 "name": "raid_bdev1", 00:13:31.959 "uuid": "f3a792fb-c3ba-46ec-a938-82e279d9da14", 00:13:31.959 "strip_size_kb": 0, 00:13:31.959 "state": "online", 00:13:31.959 "raid_level": "raid1", 00:13:31.959 "superblock": true, 00:13:31.959 "num_base_bdevs": 4, 00:13:31.959 "num_base_bdevs_discovered": 2, 00:13:31.959 "num_base_bdevs_operational": 2, 00:13:31.959 "base_bdevs_list": [ 00:13:31.959 { 00:13:31.959 "name": null, 00:13:31.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.959 "is_configured": false, 00:13:31.959 "data_offset": 0, 00:13:31.959 "data_size": 63488 00:13:31.959 }, 00:13:31.959 { 00:13:31.959 "name": null, 00:13:31.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.959 "is_configured": false, 00:13:31.959 "data_offset": 2048, 00:13:31.959 "data_size": 63488 00:13:31.959 }, 00:13:31.959 { 00:13:31.959 "name": "BaseBdev3", 00:13:31.959 "uuid": "fd25ec3e-7374-52d6-bcf2-37f8b635aa0c", 00:13:31.959 "is_configured": true, 00:13:31.959 
"data_offset": 2048, 00:13:31.959 "data_size": 63488 00:13:31.959 }, 00:13:31.959 { 00:13:31.959 "name": "BaseBdev4", 00:13:31.959 "uuid": "e767daeb-15a5-59cc-8af8-035250dc627d", 00:13:31.959 "is_configured": true, 00:13:31.959 "data_offset": 2048, 00:13:31.959 "data_size": 63488 00:13:31.959 } 00:13:31.959 ] 00:13:31.959 }' 00:13:31.959 01:33:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.959 01:33:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.218 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:32.218 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.218 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:32.218 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:32.218 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.218 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.218 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.218 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.218 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.218 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.218 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.218 "name": "raid_bdev1", 00:13:32.219 "uuid": "f3a792fb-c3ba-46ec-a938-82e279d9da14", 00:13:32.219 "strip_size_kb": 0, 00:13:32.219 "state": "online", 00:13:32.219 "raid_level": "raid1", 00:13:32.219 "superblock": true, 
00:13:32.219 "num_base_bdevs": 4, 00:13:32.219 "num_base_bdevs_discovered": 2, 00:13:32.219 "num_base_bdevs_operational": 2, 00:13:32.219 "base_bdevs_list": [ 00:13:32.219 { 00:13:32.219 "name": null, 00:13:32.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.219 "is_configured": false, 00:13:32.219 "data_offset": 0, 00:13:32.219 "data_size": 63488 00:13:32.219 }, 00:13:32.219 { 00:13:32.219 "name": null, 00:13:32.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.219 "is_configured": false, 00:13:32.219 "data_offset": 2048, 00:13:32.219 "data_size": 63488 00:13:32.219 }, 00:13:32.219 { 00:13:32.219 "name": "BaseBdev3", 00:13:32.219 "uuid": "fd25ec3e-7374-52d6-bcf2-37f8b635aa0c", 00:13:32.219 "is_configured": true, 00:13:32.219 "data_offset": 2048, 00:13:32.219 "data_size": 63488 00:13:32.219 }, 00:13:32.219 { 00:13:32.219 "name": "BaseBdev4", 00:13:32.219 "uuid": "e767daeb-15a5-59cc-8af8-035250dc627d", 00:13:32.219 "is_configured": true, 00:13:32.219 "data_offset": 2048, 00:13:32.219 "data_size": 63488 00:13:32.219 } 00:13:32.219 ] 00:13:32.219 }' 00:13:32.219 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.478 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:32.478 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.478 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:32.478 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 90890 00:13:32.478 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 90890 ']' 00:13:32.478 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 90890 00:13:32.478 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:13:32.478 01:33:31 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:32.478 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90890 00:13:32.478 killing process with pid 90890 00:13:32.478 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:32.478 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:32.478 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90890' 00:13:32.478 Received shutdown signal, test time was about 17.815351 seconds 00:13:32.478 00:13:32.478 Latency(us) 00:13:32.478 [2024-10-09T01:33:31.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:32.478 [2024-10-09T01:33:31.371Z] =================================================================================================================== 00:13:32.478 [2024-10-09T01:33:31.371Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:32.478 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 90890 00:13:32.478 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 90890 00:13:32.478 [2024-10-09 01:33:31.244239] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:32.478 [2024-10-09 01:33:31.244440] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:32.478 [2024-10-09 01:33:31.244544] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:32.478 [2024-10-09 01:33:31.244556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:32.478 [2024-10-09 01:33:31.329336] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:33.048 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@786 -- # return 0 00:13:33.048 00:13:33.048 real 0m20.043s 00:13:33.048 user 0m26.404s 00:13:33.048 sys 0m2.740s 00:13:33.048 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:33.048 ************************************ 00:13:33.048 END TEST raid_rebuild_test_sb_io 00:13:33.048 ************************************ 00:13:33.048 01:33:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.048 01:33:31 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:13:33.048 01:33:31 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:13:33.048 01:33:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:33.048 01:33:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:33.048 01:33:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:33.048 ************************************ 00:13:33.048 START TEST raid5f_state_function_test 00:13:33.048 ************************************ 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev1 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' 
false = true ']' 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=91595 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 91595' 00:13:33.048 Process raid pid: 91595 00:13:33.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 91595 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 91595 ']' 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:33.048 01:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.048 [2024-10-09 01:33:31.883457] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:13:33.048 [2024-10-09 01:33:31.883585] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.308 [2024-10-09 01:33:32.017514] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:33.308 [2024-10-09 01:33:32.046444] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.308 [2024-10-09 01:33:32.117662] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.308 [2024-10-09 01:33:32.193194] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.308 [2024-10-09 01:33:32.193224] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.877 01:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:33.877 01:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:13:33.877 01:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:33.877 01:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.877 01:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.877 [2024-10-09 01:33:32.706216] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:33.877 [2024-10-09 01:33:32.706279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:33.877 [2024-10-09 01:33:32.706294] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:33.877 [2024-10-09 01:33:32.706301] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:33.877 [2024-10-09 01:33:32.706312] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:33.877 [2024-10-09 01:33:32.706318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:33.877 01:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.877 01:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:33.877 01:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.877 01:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.877 01:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:33.877 01:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.877 01:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:33.877 01:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.877 01:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.877 01:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.877 01:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.877 01:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.877 01:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.877 01:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:33.877 01:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.877 01:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.877 01:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.877 "name": "Existed_Raid", 00:13:33.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.877 "strip_size_kb": 64, 00:13:33.877 "state": "configuring", 00:13:33.877 "raid_level": "raid5f", 00:13:33.877 "superblock": false, 00:13:33.877 "num_base_bdevs": 3, 00:13:33.877 "num_base_bdevs_discovered": 0, 00:13:33.877 "num_base_bdevs_operational": 3, 00:13:33.877 "base_bdevs_list": [ 00:13:33.877 { 00:13:33.877 "name": "BaseBdev1", 00:13:33.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.877 "is_configured": false, 00:13:33.877 "data_offset": 0, 00:13:33.877 "data_size": 0 00:13:33.877 }, 00:13:33.877 { 00:13:33.877 "name": "BaseBdev2", 00:13:33.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.877 "is_configured": false, 00:13:33.877 "data_offset": 0, 00:13:33.877 "data_size": 0 00:13:33.877 }, 00:13:33.877 { 00:13:33.877 "name": "BaseBdev3", 00:13:33.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.877 "is_configured": false, 00:13:33.877 "data_offset": 0, 00:13:33.877 "data_size": 0 00:13:33.877 } 00:13:33.877 ] 00:13:33.877 }' 00:13:33.877 01:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.877 01:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.446 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:34.446 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.446 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.446 [2024-10-09 
01:33:33.142226] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:34.446 [2024-10-09 01:33:33.142329] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:34.446 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.446 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:34.446 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.446 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.446 [2024-10-09 01:33:33.154241] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:34.446 [2024-10-09 01:33:33.154318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:34.446 [2024-10-09 01:33:33.154348] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:34.446 [2024-10-09 01:33:33.154368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:34.446 [2024-10-09 01:33:33.154386] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:34.446 [2024-10-09 01:33:33.154403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:34.446 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.446 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:34.446 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.446 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:13:34.446 [2024-10-09 01:33:33.181148] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:34.446 BaseBdev1 00:13:34.446 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.446 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:34.446 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:34.446 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:34.446 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:34.446 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:34.446 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:34.446 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:34.446 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.446 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.446 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.446 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:34.446 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.446 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.446 [ 00:13:34.446 { 00:13:34.446 "name": "BaseBdev1", 00:13:34.446 "aliases": [ 00:13:34.446 "cd45591b-6e15-4681-b49c-3b9c984da6cd" 00:13:34.446 ], 00:13:34.446 "product_name": "Malloc disk", 00:13:34.446 "block_size": 512, 
00:13:34.446 "num_blocks": 65536, 00:13:34.446 "uuid": "cd45591b-6e15-4681-b49c-3b9c984da6cd", 00:13:34.446 "assigned_rate_limits": { 00:13:34.446 "rw_ios_per_sec": 0, 00:13:34.446 "rw_mbytes_per_sec": 0, 00:13:34.446 "r_mbytes_per_sec": 0, 00:13:34.446 "w_mbytes_per_sec": 0 00:13:34.446 }, 00:13:34.446 "claimed": true, 00:13:34.446 "claim_type": "exclusive_write", 00:13:34.446 "zoned": false, 00:13:34.446 "supported_io_types": { 00:13:34.446 "read": true, 00:13:34.446 "write": true, 00:13:34.446 "unmap": true, 00:13:34.446 "flush": true, 00:13:34.446 "reset": true, 00:13:34.446 "nvme_admin": false, 00:13:34.446 "nvme_io": false, 00:13:34.446 "nvme_io_md": false, 00:13:34.446 "write_zeroes": true, 00:13:34.446 "zcopy": true, 00:13:34.446 "get_zone_info": false, 00:13:34.446 "zone_management": false, 00:13:34.446 "zone_append": false, 00:13:34.446 "compare": false, 00:13:34.446 "compare_and_write": false, 00:13:34.446 "abort": true, 00:13:34.446 "seek_hole": false, 00:13:34.446 "seek_data": false, 00:13:34.446 "copy": true, 00:13:34.446 "nvme_iov_md": false 00:13:34.446 }, 00:13:34.446 "memory_domains": [ 00:13:34.446 { 00:13:34.446 "dma_device_id": "system", 00:13:34.446 "dma_device_type": 1 00:13:34.446 }, 00:13:34.446 { 00:13:34.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.447 "dma_device_type": 2 00:13:34.447 } 00:13:34.447 ], 00:13:34.447 "driver_specific": {} 00:13:34.447 } 00:13:34.447 ] 00:13:34.447 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.447 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:34.447 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:34.447 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.447 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 
-- # local expected_state=configuring 00:13:34.447 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:34.447 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.447 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.447 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.447 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.447 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.447 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.447 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.447 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.447 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.447 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.447 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.447 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.447 "name": "Existed_Raid", 00:13:34.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.447 "strip_size_kb": 64, 00:13:34.447 "state": "configuring", 00:13:34.447 "raid_level": "raid5f", 00:13:34.447 "superblock": false, 00:13:34.447 "num_base_bdevs": 3, 00:13:34.447 "num_base_bdevs_discovered": 1, 00:13:34.447 "num_base_bdevs_operational": 3, 00:13:34.447 "base_bdevs_list": [ 00:13:34.447 { 00:13:34.447 "name": "BaseBdev1", 00:13:34.447 "uuid": 
"cd45591b-6e15-4681-b49c-3b9c984da6cd", 00:13:34.447 "is_configured": true, 00:13:34.447 "data_offset": 0, 00:13:34.447 "data_size": 65536 00:13:34.447 }, 00:13:34.447 { 00:13:34.447 "name": "BaseBdev2", 00:13:34.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.447 "is_configured": false, 00:13:34.447 "data_offset": 0, 00:13:34.447 "data_size": 0 00:13:34.447 }, 00:13:34.447 { 00:13:34.447 "name": "BaseBdev3", 00:13:34.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.447 "is_configured": false, 00:13:34.447 "data_offset": 0, 00:13:34.447 "data_size": 0 00:13:34.447 } 00:13:34.447 ] 00:13:34.447 }' 00:13:34.447 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.447 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.016 [2024-10-09 01:33:33.701306] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:35.016 [2024-10-09 01:33:33.701364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.016 [2024-10-09 
01:33:33.713330] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:35.016 [2024-10-09 01:33:33.715400] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:35.016 [2024-10-09 01:33:33.715486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:35.016 [2024-10-09 01:33:33.715504] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:35.016 [2024-10-09 01:33:33.715511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.016 "name": "Existed_Raid", 00:13:35.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.016 "strip_size_kb": 64, 00:13:35.016 "state": "configuring", 00:13:35.016 "raid_level": "raid5f", 00:13:35.016 "superblock": false, 00:13:35.016 "num_base_bdevs": 3, 00:13:35.016 "num_base_bdevs_discovered": 1, 00:13:35.016 "num_base_bdevs_operational": 3, 00:13:35.016 "base_bdevs_list": [ 00:13:35.016 { 00:13:35.016 "name": "BaseBdev1", 00:13:35.016 "uuid": "cd45591b-6e15-4681-b49c-3b9c984da6cd", 00:13:35.016 "is_configured": true, 00:13:35.016 "data_offset": 0, 00:13:35.016 "data_size": 65536 00:13:35.016 }, 00:13:35.016 { 00:13:35.016 "name": "BaseBdev2", 00:13:35.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.016 "is_configured": false, 00:13:35.016 "data_offset": 0, 00:13:35.016 "data_size": 0 00:13:35.016 }, 00:13:35.016 { 00:13:35.016 "name": "BaseBdev3", 00:13:35.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.016 "is_configured": false, 00:13:35.016 "data_offset": 0, 00:13:35.016 "data_size": 0 00:13:35.016 } 00:13:35.016 ] 00:13:35.016 }' 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.016 01:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.584 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:35.584 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.584 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.584 [2024-10-09 01:33:34.198914] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:35.584 BaseBdev2 00:13:35.584 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.584 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:35.584 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:35.584 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:35.584 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:35.584 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:35.584 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:35.584 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:35.584 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.584 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.584 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.584 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:35.584 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.584 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.584 [ 00:13:35.584 { 00:13:35.584 "name": "BaseBdev2", 00:13:35.584 "aliases": [ 00:13:35.584 "5bea9959-8fc2-41d5-99eb-8c35a4957582" 00:13:35.584 ], 00:13:35.584 "product_name": "Malloc disk", 00:13:35.584 "block_size": 512, 00:13:35.584 "num_blocks": 65536, 00:13:35.584 "uuid": "5bea9959-8fc2-41d5-99eb-8c35a4957582", 00:13:35.584 "assigned_rate_limits": { 00:13:35.584 "rw_ios_per_sec": 0, 00:13:35.584 "rw_mbytes_per_sec": 0, 00:13:35.585 "r_mbytes_per_sec": 0, 00:13:35.585 "w_mbytes_per_sec": 0 00:13:35.585 }, 00:13:35.585 "claimed": true, 00:13:35.585 "claim_type": "exclusive_write", 00:13:35.585 "zoned": false, 00:13:35.585 "supported_io_types": { 00:13:35.585 "read": true, 00:13:35.585 "write": true, 00:13:35.585 "unmap": true, 00:13:35.585 "flush": true, 00:13:35.585 "reset": true, 00:13:35.585 "nvme_admin": false, 00:13:35.585 "nvme_io": false, 00:13:35.585 "nvme_io_md": false, 00:13:35.585 "write_zeroes": true, 00:13:35.585 "zcopy": true, 00:13:35.585 "get_zone_info": false, 00:13:35.585 "zone_management": false, 00:13:35.585 "zone_append": false, 00:13:35.585 "compare": false, 00:13:35.585 "compare_and_write": false, 00:13:35.585 "abort": true, 00:13:35.585 "seek_hole": false, 00:13:35.585 "seek_data": false, 00:13:35.585 "copy": true, 00:13:35.585 "nvme_iov_md": false 00:13:35.585 }, 00:13:35.585 "memory_domains": [ 00:13:35.585 { 00:13:35.585 "dma_device_id": "system", 00:13:35.585 "dma_device_type": 1 00:13:35.585 }, 00:13:35.585 { 00:13:35.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.585 "dma_device_type": 2 00:13:35.585 } 00:13:35.585 ], 00:13:35.585 "driver_specific": {} 00:13:35.585 } 00:13:35.585 ] 00:13:35.585 01:33:34 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.585 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:35.585 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:35.585 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:35.585 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:35.585 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.585 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.585 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:35.585 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.585 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:35.585 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.585 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.585 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.585 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.585 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.585 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.585 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.585 01:33:34 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:35.585 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.585 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.585 "name": "Existed_Raid", 00:13:35.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.585 "strip_size_kb": 64, 00:13:35.585 "state": "configuring", 00:13:35.585 "raid_level": "raid5f", 00:13:35.585 "superblock": false, 00:13:35.585 "num_base_bdevs": 3, 00:13:35.585 "num_base_bdevs_discovered": 2, 00:13:35.585 "num_base_bdevs_operational": 3, 00:13:35.585 "base_bdevs_list": [ 00:13:35.585 { 00:13:35.585 "name": "BaseBdev1", 00:13:35.585 "uuid": "cd45591b-6e15-4681-b49c-3b9c984da6cd", 00:13:35.585 "is_configured": true, 00:13:35.585 "data_offset": 0, 00:13:35.585 "data_size": 65536 00:13:35.585 }, 00:13:35.585 { 00:13:35.585 "name": "BaseBdev2", 00:13:35.585 "uuid": "5bea9959-8fc2-41d5-99eb-8c35a4957582", 00:13:35.585 "is_configured": true, 00:13:35.585 "data_offset": 0, 00:13:35.585 "data_size": 65536 00:13:35.585 }, 00:13:35.585 { 00:13:35.585 "name": "BaseBdev3", 00:13:35.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.585 "is_configured": false, 00:13:35.585 "data_offset": 0, 00:13:35.585 "data_size": 0 00:13:35.585 } 00:13:35.585 ] 00:13:35.585 }' 00:13:35.585 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.585 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.844 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:35.844 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.844 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.844 [2024-10-09 01:33:34.707696] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:35.844 [2024-10-09 01:33:34.707825] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:35.844 [2024-10-09 01:33:34.707839] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:35.844 [2024-10-09 01:33:34.708152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:35.844 [2024-10-09 01:33:34.708620] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:35.844 [2024-10-09 01:33:34.708654] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:35.844 [2024-10-09 01:33:34.708897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.844 BaseBdev3 00:13:35.844 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.844 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:35.844 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:35.844 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:35.844 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:35.844 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:35.844 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:35.844 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:35.844 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.844 01:33:34 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:35.844 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.844 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:35.844 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.844 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.103 [ 00:13:36.103 { 00:13:36.103 "name": "BaseBdev3", 00:13:36.103 "aliases": [ 00:13:36.103 "e5c0ca19-3321-4714-a7aa-6beecea51942" 00:13:36.103 ], 00:13:36.103 "product_name": "Malloc disk", 00:13:36.103 "block_size": 512, 00:13:36.103 "num_blocks": 65536, 00:13:36.103 "uuid": "e5c0ca19-3321-4714-a7aa-6beecea51942", 00:13:36.103 "assigned_rate_limits": { 00:13:36.103 "rw_ios_per_sec": 0, 00:13:36.103 "rw_mbytes_per_sec": 0, 00:13:36.103 "r_mbytes_per_sec": 0, 00:13:36.103 "w_mbytes_per_sec": 0 00:13:36.103 }, 00:13:36.103 "claimed": true, 00:13:36.103 "claim_type": "exclusive_write", 00:13:36.103 "zoned": false, 00:13:36.103 "supported_io_types": { 00:13:36.103 "read": true, 00:13:36.103 "write": true, 00:13:36.103 "unmap": true, 00:13:36.103 "flush": true, 00:13:36.103 "reset": true, 00:13:36.103 "nvme_admin": false, 00:13:36.103 "nvme_io": false, 00:13:36.103 "nvme_io_md": false, 00:13:36.103 "write_zeroes": true, 00:13:36.103 "zcopy": true, 00:13:36.103 "get_zone_info": false, 00:13:36.103 "zone_management": false, 00:13:36.103 "zone_append": false, 00:13:36.103 "compare": false, 00:13:36.103 "compare_and_write": false, 00:13:36.103 "abort": true, 00:13:36.103 "seek_hole": false, 00:13:36.103 "seek_data": false, 00:13:36.103 "copy": true, 00:13:36.103 "nvme_iov_md": false 00:13:36.103 }, 00:13:36.103 "memory_domains": [ 00:13:36.103 { 00:13:36.103 "dma_device_id": "system", 00:13:36.103 "dma_device_type": 1 00:13:36.103 }, 00:13:36.103 { 00:13:36.103 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.103 "dma_device_type": 2 00:13:36.103 } 00:13:36.103 ], 00:13:36.103 "driver_specific": {} 00:13:36.103 } 00:13:36.103 ] 00:13:36.103 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.103 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:36.103 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:36.103 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:36.103 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:36.103 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.103 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.103 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:36.103 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.103 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.103 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.103 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.103 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.103 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.103 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.103 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:13:36.103 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.103 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.104 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.104 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.104 "name": "Existed_Raid", 00:13:36.104 "uuid": "d65c786a-d840-4dd5-a7ec-647687b2ddd7", 00:13:36.104 "strip_size_kb": 64, 00:13:36.104 "state": "online", 00:13:36.104 "raid_level": "raid5f", 00:13:36.104 "superblock": false, 00:13:36.104 "num_base_bdevs": 3, 00:13:36.104 "num_base_bdevs_discovered": 3, 00:13:36.104 "num_base_bdevs_operational": 3, 00:13:36.104 "base_bdevs_list": [ 00:13:36.104 { 00:13:36.104 "name": "BaseBdev1", 00:13:36.104 "uuid": "cd45591b-6e15-4681-b49c-3b9c984da6cd", 00:13:36.104 "is_configured": true, 00:13:36.104 "data_offset": 0, 00:13:36.104 "data_size": 65536 00:13:36.104 }, 00:13:36.104 { 00:13:36.104 "name": "BaseBdev2", 00:13:36.104 "uuid": "5bea9959-8fc2-41d5-99eb-8c35a4957582", 00:13:36.104 "is_configured": true, 00:13:36.104 "data_offset": 0, 00:13:36.104 "data_size": 65536 00:13:36.104 }, 00:13:36.104 { 00:13:36.104 "name": "BaseBdev3", 00:13:36.104 "uuid": "e5c0ca19-3321-4714-a7aa-6beecea51942", 00:13:36.104 "is_configured": true, 00:13:36.104 "data_offset": 0, 00:13:36.104 "data_size": 65536 00:13:36.104 } 00:13:36.104 ] 00:13:36.104 }' 00:13:36.104 01:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.104 01:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.363 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:36.363 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:13:36.363 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:36.363 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:36.363 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:36.363 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:36.363 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:36.363 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:36.363 01:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.363 01:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.363 [2024-10-09 01:33:35.244052] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:36.628 01:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.628 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:36.628 "name": "Existed_Raid", 00:13:36.628 "aliases": [ 00:13:36.628 "d65c786a-d840-4dd5-a7ec-647687b2ddd7" 00:13:36.628 ], 00:13:36.628 "product_name": "Raid Volume", 00:13:36.628 "block_size": 512, 00:13:36.628 "num_blocks": 131072, 00:13:36.628 "uuid": "d65c786a-d840-4dd5-a7ec-647687b2ddd7", 00:13:36.628 "assigned_rate_limits": { 00:13:36.628 "rw_ios_per_sec": 0, 00:13:36.628 "rw_mbytes_per_sec": 0, 00:13:36.628 "r_mbytes_per_sec": 0, 00:13:36.628 "w_mbytes_per_sec": 0 00:13:36.628 }, 00:13:36.628 "claimed": false, 00:13:36.628 "zoned": false, 00:13:36.628 "supported_io_types": { 00:13:36.628 "read": true, 00:13:36.628 "write": true, 00:13:36.628 "unmap": false, 00:13:36.628 "flush": false, 00:13:36.628 "reset": true, 
00:13:36.628 "nvme_admin": false, 00:13:36.628 "nvme_io": false, 00:13:36.628 "nvme_io_md": false, 00:13:36.628 "write_zeroes": true, 00:13:36.628 "zcopy": false, 00:13:36.628 "get_zone_info": false, 00:13:36.628 "zone_management": false, 00:13:36.628 "zone_append": false, 00:13:36.628 "compare": false, 00:13:36.628 "compare_and_write": false, 00:13:36.628 "abort": false, 00:13:36.628 "seek_hole": false, 00:13:36.628 "seek_data": false, 00:13:36.628 "copy": false, 00:13:36.628 "nvme_iov_md": false 00:13:36.628 }, 00:13:36.628 "driver_specific": { 00:13:36.628 "raid": { 00:13:36.628 "uuid": "d65c786a-d840-4dd5-a7ec-647687b2ddd7", 00:13:36.628 "strip_size_kb": 64, 00:13:36.628 "state": "online", 00:13:36.628 "raid_level": "raid5f", 00:13:36.628 "superblock": false, 00:13:36.628 "num_base_bdevs": 3, 00:13:36.628 "num_base_bdevs_discovered": 3, 00:13:36.628 "num_base_bdevs_operational": 3, 00:13:36.628 "base_bdevs_list": [ 00:13:36.628 { 00:13:36.628 "name": "BaseBdev1", 00:13:36.628 "uuid": "cd45591b-6e15-4681-b49c-3b9c984da6cd", 00:13:36.628 "is_configured": true, 00:13:36.628 "data_offset": 0, 00:13:36.628 "data_size": 65536 00:13:36.628 }, 00:13:36.628 { 00:13:36.628 "name": "BaseBdev2", 00:13:36.628 "uuid": "5bea9959-8fc2-41d5-99eb-8c35a4957582", 00:13:36.628 "is_configured": true, 00:13:36.628 "data_offset": 0, 00:13:36.629 "data_size": 65536 00:13:36.629 }, 00:13:36.629 { 00:13:36.629 "name": "BaseBdev3", 00:13:36.629 "uuid": "e5c0ca19-3321-4714-a7aa-6beecea51942", 00:13:36.629 "is_configured": true, 00:13:36.629 "data_offset": 0, 00:13:36.629 "data_size": 65536 00:13:36.629 } 00:13:36.629 ] 00:13:36.629 } 00:13:36.629 } 00:13:36.629 }' 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:36.629 BaseBdev2 00:13:36.629 
BaseBdev3' 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.629 01:33:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.629 01:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.629 [2024-10-09 01:33:35.512001] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:36.905 01:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.905 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:36.905 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:36.905 01:33:35 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:36.905 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:36.905 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:36.905 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:36.905 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.905 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.905 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:36.905 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.905 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:36.905 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.905 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.905 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.905 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.905 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.905 01:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.905 01:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.905 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.905 01:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:13:36.905 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.905 "name": "Existed_Raid", 00:13:36.905 "uuid": "d65c786a-d840-4dd5-a7ec-647687b2ddd7", 00:13:36.905 "strip_size_kb": 64, 00:13:36.905 "state": "online", 00:13:36.905 "raid_level": "raid5f", 00:13:36.905 "superblock": false, 00:13:36.905 "num_base_bdevs": 3, 00:13:36.905 "num_base_bdevs_discovered": 2, 00:13:36.905 "num_base_bdevs_operational": 2, 00:13:36.905 "base_bdevs_list": [ 00:13:36.905 { 00:13:36.905 "name": null, 00:13:36.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.905 "is_configured": false, 00:13:36.905 "data_offset": 0, 00:13:36.905 "data_size": 65536 00:13:36.905 }, 00:13:36.905 { 00:13:36.905 "name": "BaseBdev2", 00:13:36.905 "uuid": "5bea9959-8fc2-41d5-99eb-8c35a4957582", 00:13:36.905 "is_configured": true, 00:13:36.905 "data_offset": 0, 00:13:36.905 "data_size": 65536 00:13:36.905 }, 00:13:36.905 { 00:13:36.905 "name": "BaseBdev3", 00:13:36.905 "uuid": "e5c0ca19-3321-4714-a7aa-6beecea51942", 00:13:36.905 "is_configured": true, 00:13:36.905 "data_offset": 0, 00:13:36.905 "data_size": 65536 00:13:36.905 } 00:13:36.905 ] 00:13:36.905 }' 00:13:36.905 01:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.905 01:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.178 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:37.178 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:37.178 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.178 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:37.178 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.178 01:33:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.178 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.178 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:37.178 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:37.178 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:37.178 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.178 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.438 [2024-10-09 01:33:36.074641] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:37.438 [2024-10-09 01:33:36.074755] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:37.438 [2024-10-09 01:33:36.095251] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.438 [2024-10-09 01:33:36.155279] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:37.438 [2024-10-09 01:33:36.155334] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.438 BaseBdev2 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev2 -t 2000 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.438 [ 00:13:37.438 { 00:13:37.438 "name": "BaseBdev2", 00:13:37.438 "aliases": [ 00:13:37.438 "f31ea46c-67ec-4318-9bc4-64005ac4b375" 00:13:37.438 ], 00:13:37.438 "product_name": "Malloc disk", 00:13:37.438 "block_size": 512, 00:13:37.438 "num_blocks": 65536, 00:13:37.438 "uuid": "f31ea46c-67ec-4318-9bc4-64005ac4b375", 00:13:37.438 "assigned_rate_limits": { 00:13:37.438 "rw_ios_per_sec": 0, 00:13:37.438 "rw_mbytes_per_sec": 0, 00:13:37.438 "r_mbytes_per_sec": 0, 00:13:37.438 "w_mbytes_per_sec": 0 00:13:37.438 }, 00:13:37.438 "claimed": false, 00:13:37.438 "zoned": false, 00:13:37.438 "supported_io_types": { 00:13:37.438 "read": true, 00:13:37.438 "write": true, 00:13:37.438 "unmap": true, 00:13:37.438 "flush": true, 00:13:37.438 "reset": true, 00:13:37.438 "nvme_admin": false, 00:13:37.438 "nvme_io": false, 00:13:37.438 "nvme_io_md": false, 00:13:37.438 "write_zeroes": true, 00:13:37.438 "zcopy": true, 00:13:37.438 "get_zone_info": false, 00:13:37.438 "zone_management": false, 00:13:37.438 "zone_append": false, 00:13:37.438 "compare": false, 00:13:37.438 "compare_and_write": false, 00:13:37.438 "abort": true, 00:13:37.438 "seek_hole": false, 00:13:37.438 "seek_data": false, 00:13:37.438 "copy": true, 00:13:37.438 "nvme_iov_md": false 00:13:37.438 }, 00:13:37.438 "memory_domains": [ 00:13:37.438 { 00:13:37.438 "dma_device_id": "system", 00:13:37.438 "dma_device_type": 1 00:13:37.438 }, 00:13:37.438 { 00:13:37.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.438 "dma_device_type": 2 00:13:37.438 } 00:13:37.438 ], 00:13:37.438 "driver_specific": {} 00:13:37.438 } 00:13:37.438 ] 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.438 01:33:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.438 BaseBdev3 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.438 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.698 [ 00:13:37.698 { 00:13:37.698 "name": "BaseBdev3", 00:13:37.698 "aliases": [ 00:13:37.698 "727bfc40-e7eb-46be-8eb2-d266bdefd395" 00:13:37.698 ], 00:13:37.698 "product_name": "Malloc disk", 00:13:37.698 "block_size": 512, 00:13:37.698 "num_blocks": 65536, 00:13:37.698 "uuid": "727bfc40-e7eb-46be-8eb2-d266bdefd395", 00:13:37.698 "assigned_rate_limits": { 00:13:37.698 "rw_ios_per_sec": 0, 00:13:37.698 "rw_mbytes_per_sec": 0, 00:13:37.698 "r_mbytes_per_sec": 0, 00:13:37.698 "w_mbytes_per_sec": 0 00:13:37.698 }, 00:13:37.698 "claimed": false, 00:13:37.698 "zoned": false, 00:13:37.698 "supported_io_types": { 00:13:37.698 "read": true, 00:13:37.698 "write": true, 00:13:37.698 "unmap": true, 00:13:37.698 "flush": true, 00:13:37.698 "reset": true, 00:13:37.698 "nvme_admin": false, 00:13:37.698 "nvme_io": false, 00:13:37.698 "nvme_io_md": false, 00:13:37.698 "write_zeroes": true, 00:13:37.698 "zcopy": true, 00:13:37.698 "get_zone_info": false, 00:13:37.698 "zone_management": false, 00:13:37.698 "zone_append": false, 00:13:37.698 "compare": false, 00:13:37.698 "compare_and_write": false, 00:13:37.698 "abort": true, 00:13:37.698 "seek_hole": false, 00:13:37.698 "seek_data": false, 00:13:37.698 "copy": true, 00:13:37.698 "nvme_iov_md": false 00:13:37.698 }, 00:13:37.698 "memory_domains": [ 00:13:37.698 { 00:13:37.698 "dma_device_id": "system", 00:13:37.698 "dma_device_type": 1 00:13:37.698 }, 00:13:37.698 { 00:13:37.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.698 "dma_device_type": 2 00:13:37.698 } 00:13:37.698 ], 00:13:37.698 "driver_specific": {} 00:13:37.698 } 00:13:37.698 ] 00:13:37.698 01:33:36 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.698 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:37.698 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:37.698 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:37.698 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:37.698 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.698 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.698 [2024-10-09 01:33:36.347428] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:37.698 [2024-10-09 01:33:36.347546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:37.698 [2024-10-09 01:33:36.347589] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:37.698 [2024-10-09 01:33:36.349736] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:37.698 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.698 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:37.698 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:37.698 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:37.698 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:37.698 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:13:37.698 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.698 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.698 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.698 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.698 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.698 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.698 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.698 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.698 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.698 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.698 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.698 "name": "Existed_Raid", 00:13:37.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.698 "strip_size_kb": 64, 00:13:37.698 "state": "configuring", 00:13:37.698 "raid_level": "raid5f", 00:13:37.698 "superblock": false, 00:13:37.698 "num_base_bdevs": 3, 00:13:37.698 "num_base_bdevs_discovered": 2, 00:13:37.698 "num_base_bdevs_operational": 3, 00:13:37.698 "base_bdevs_list": [ 00:13:37.698 { 00:13:37.698 "name": "BaseBdev1", 00:13:37.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.698 "is_configured": false, 00:13:37.698 "data_offset": 0, 00:13:37.698 "data_size": 0 00:13:37.698 }, 00:13:37.698 { 00:13:37.698 "name": "BaseBdev2", 00:13:37.698 "uuid": "f31ea46c-67ec-4318-9bc4-64005ac4b375", 
00:13:37.698 "is_configured": true, 00:13:37.698 "data_offset": 0, 00:13:37.698 "data_size": 65536 00:13:37.698 }, 00:13:37.698 { 00:13:37.698 "name": "BaseBdev3", 00:13:37.698 "uuid": "727bfc40-e7eb-46be-8eb2-d266bdefd395", 00:13:37.698 "is_configured": true, 00:13:37.698 "data_offset": 0, 00:13:37.698 "data_size": 65536 00:13:37.698 } 00:13:37.698 ] 00:13:37.698 }' 00:13:37.698 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.698 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.958 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:37.958 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.958 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.958 [2024-10-09 01:33:36.739498] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:37.958 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.958 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:37.958 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:37.958 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:37.958 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:37.958 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:37.958 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.958 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:13:37.958 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.958 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.958 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.958 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.958 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.958 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.958 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.958 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.958 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.958 "name": "Existed_Raid", 00:13:37.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.958 "strip_size_kb": 64, 00:13:37.958 "state": "configuring", 00:13:37.958 "raid_level": "raid5f", 00:13:37.958 "superblock": false, 00:13:37.958 "num_base_bdevs": 3, 00:13:37.958 "num_base_bdevs_discovered": 1, 00:13:37.958 "num_base_bdevs_operational": 3, 00:13:37.958 "base_bdevs_list": [ 00:13:37.958 { 00:13:37.958 "name": "BaseBdev1", 00:13:37.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.958 "is_configured": false, 00:13:37.958 "data_offset": 0, 00:13:37.958 "data_size": 0 00:13:37.958 }, 00:13:37.958 { 00:13:37.958 "name": null, 00:13:37.958 "uuid": "f31ea46c-67ec-4318-9bc4-64005ac4b375", 00:13:37.958 "is_configured": false, 00:13:37.958 "data_offset": 0, 00:13:37.958 "data_size": 65536 00:13:37.958 }, 00:13:37.958 { 00:13:37.958 "name": "BaseBdev3", 00:13:37.958 "uuid": "727bfc40-e7eb-46be-8eb2-d266bdefd395", 00:13:37.958 "is_configured": 
true, 00:13:37.958 "data_offset": 0, 00:13:37.958 "data_size": 65536 00:13:37.958 } 00:13:37.958 ] 00:13:37.958 }' 00:13:37.958 01:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.958 01:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.527 [2024-10-09 01:33:37.200202] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:38.527 BaseBdev1 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:38.527 01:33:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.527 [ 00:13:38.527 { 00:13:38.527 "name": "BaseBdev1", 00:13:38.527 "aliases": [ 00:13:38.527 "4cde3603-447d-445d-a107-7e05c013f9c5" 00:13:38.527 ], 00:13:38.527 "product_name": "Malloc disk", 00:13:38.527 "block_size": 512, 00:13:38.527 "num_blocks": 65536, 00:13:38.527 "uuid": "4cde3603-447d-445d-a107-7e05c013f9c5", 00:13:38.527 "assigned_rate_limits": { 00:13:38.527 "rw_ios_per_sec": 0, 00:13:38.527 "rw_mbytes_per_sec": 0, 00:13:38.527 "r_mbytes_per_sec": 0, 00:13:38.527 "w_mbytes_per_sec": 0 00:13:38.527 }, 00:13:38.527 "claimed": true, 00:13:38.527 "claim_type": "exclusive_write", 00:13:38.527 "zoned": false, 00:13:38.527 "supported_io_types": { 00:13:38.527 "read": true, 00:13:38.527 "write": true, 00:13:38.527 "unmap": true, 00:13:38.527 "flush": true, 00:13:38.527 "reset": true, 00:13:38.527 "nvme_admin": false, 00:13:38.527 "nvme_io": false, 00:13:38.527 
"nvme_io_md": false, 00:13:38.527 "write_zeroes": true, 00:13:38.527 "zcopy": true, 00:13:38.527 "get_zone_info": false, 00:13:38.527 "zone_management": false, 00:13:38.527 "zone_append": false, 00:13:38.527 "compare": false, 00:13:38.527 "compare_and_write": false, 00:13:38.527 "abort": true, 00:13:38.527 "seek_hole": false, 00:13:38.527 "seek_data": false, 00:13:38.527 "copy": true, 00:13:38.527 "nvme_iov_md": false 00:13:38.527 }, 00:13:38.527 "memory_domains": [ 00:13:38.527 { 00:13:38.527 "dma_device_id": "system", 00:13:38.527 "dma_device_type": 1 00:13:38.527 }, 00:13:38.527 { 00:13:38.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.527 "dma_device_type": 2 00:13:38.527 } 00:13:38.527 ], 00:13:38.527 "driver_specific": {} 00:13:38.527 } 00:13:38.527 ] 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.527 01:33:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.527 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.527 "name": "Existed_Raid", 00:13:38.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.527 "strip_size_kb": 64, 00:13:38.527 "state": "configuring", 00:13:38.527 "raid_level": "raid5f", 00:13:38.527 "superblock": false, 00:13:38.527 "num_base_bdevs": 3, 00:13:38.527 "num_base_bdevs_discovered": 2, 00:13:38.527 "num_base_bdevs_operational": 3, 00:13:38.527 "base_bdevs_list": [ 00:13:38.527 { 00:13:38.527 "name": "BaseBdev1", 00:13:38.527 "uuid": "4cde3603-447d-445d-a107-7e05c013f9c5", 00:13:38.527 "is_configured": true, 00:13:38.527 "data_offset": 0, 00:13:38.527 "data_size": 65536 00:13:38.527 }, 00:13:38.527 { 00:13:38.527 "name": null, 00:13:38.527 "uuid": "f31ea46c-67ec-4318-9bc4-64005ac4b375", 00:13:38.528 "is_configured": false, 00:13:38.528 "data_offset": 0, 00:13:38.528 "data_size": 65536 00:13:38.528 }, 00:13:38.528 { 00:13:38.528 "name": "BaseBdev3", 00:13:38.528 "uuid": "727bfc40-e7eb-46be-8eb2-d266bdefd395", 00:13:38.528 "is_configured": true, 00:13:38.528 "data_offset": 0, 00:13:38.528 "data_size": 65536 00:13:38.528 } 00:13:38.528 ] 00:13:38.528 }' 00:13:38.528 
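The `verify_raid_bdev_state` checks in the log above fetch the array description with `rpc_cmd bdev_raid_get_bdevs all`, filter it with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compare fields such as `state` and `num_base_bdevs_discovered` against expectations. A minimal self-contained sketch of that verification, using the JSON shown in the log and plain `sed` field extraction in place of the jq filter (the `get_field` helper is illustrative, not part of the SPDK test suite):

```shell
# Sketch of the state check: the JSON blob is copied from the log output of
# bdev_raid_get_bdevs; get_field pulls one scalar field out of it.
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid5f",
  "strip_size_kb": 64,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 3
}'

get_field() {
  # Extract a quoted or unquoted scalar value for field $1 (illustrative helper).
  printf '%s\n' "$raid_bdev_info" |
    sed -n "s/.*\"$1\": *\"\{0,1\}\([^\",]*\)\"\{0,1\},*.*/\1/p"
}

state=$(get_field state)
discovered=$(get_field num_base_bdevs_discovered)
operational=$(get_field num_base_bdevs_operational)

# The array reports "configuring" while some base bdevs are still missing.
[ "$state" = configuring ] || exit 1
[ "$discovered" -lt "$operational" ] || exit 1
echo "state=$state discovered=$discovered/$operational"
```

In the real script the same comparison is done by `verify_raid_bdev_state Existed_Raid configuring raid5f 64 3`, which parses the jq output rather than grepping raw JSON.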
01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.528 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.096 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:39.096 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.096 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.096 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.096 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.096 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:39.096 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:39.096 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.096 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.096 [2024-10-09 01:33:37.740371] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:39.096 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.096 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:39.096 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.096 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.096 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:39.096 01:33:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.096 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:39.096 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.096 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.096 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.096 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.096 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.096 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.096 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.096 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.096 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.096 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.096 "name": "Existed_Raid", 00:13:39.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.096 "strip_size_kb": 64, 00:13:39.096 "state": "configuring", 00:13:39.096 "raid_level": "raid5f", 00:13:39.096 "superblock": false, 00:13:39.096 "num_base_bdevs": 3, 00:13:39.096 "num_base_bdevs_discovered": 1, 00:13:39.096 "num_base_bdevs_operational": 3, 00:13:39.096 "base_bdevs_list": [ 00:13:39.096 { 00:13:39.096 "name": "BaseBdev1", 00:13:39.096 "uuid": "4cde3603-447d-445d-a107-7e05c013f9c5", 00:13:39.096 "is_configured": true, 00:13:39.096 "data_offset": 0, 00:13:39.096 "data_size": 65536 00:13:39.096 }, 00:13:39.096 { 00:13:39.096 "name": 
null, 00:13:39.096 "uuid": "f31ea46c-67ec-4318-9bc4-64005ac4b375", 00:13:39.096 "is_configured": false, 00:13:39.096 "data_offset": 0, 00:13:39.096 "data_size": 65536 00:13:39.096 }, 00:13:39.096 { 00:13:39.096 "name": null, 00:13:39.096 "uuid": "727bfc40-e7eb-46be-8eb2-d266bdefd395", 00:13:39.096 "is_configured": false, 00:13:39.096 "data_offset": 0, 00:13:39.096 "data_size": 65536 00:13:39.096 } 00:13:39.096 ] 00:13:39.096 }' 00:13:39.096 01:33:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.096 01:33:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.355 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.355 01:33:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.355 01:33:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.355 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:39.355 01:33:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.355 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:39.355 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:39.355 01:33:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.355 01:33:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.355 [2024-10-09 01:33:38.224515] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:39.355 01:33:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.355 01:33:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:39.355 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.355 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.355 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:39.355 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.355 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:39.356 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.356 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.356 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.356 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.356 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.356 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.356 01:33:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.356 01:33:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.613 01:33:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.613 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.613 "name": "Existed_Raid", 00:13:39.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.613 "strip_size_kb": 64, 00:13:39.613 "state": "configuring", 00:13:39.613 
"raid_level": "raid5f", 00:13:39.613 "superblock": false, 00:13:39.613 "num_base_bdevs": 3, 00:13:39.613 "num_base_bdevs_discovered": 2, 00:13:39.613 "num_base_bdevs_operational": 3, 00:13:39.613 "base_bdevs_list": [ 00:13:39.613 { 00:13:39.613 "name": "BaseBdev1", 00:13:39.613 "uuid": "4cde3603-447d-445d-a107-7e05c013f9c5", 00:13:39.613 "is_configured": true, 00:13:39.613 "data_offset": 0, 00:13:39.613 "data_size": 65536 00:13:39.613 }, 00:13:39.613 { 00:13:39.613 "name": null, 00:13:39.613 "uuid": "f31ea46c-67ec-4318-9bc4-64005ac4b375", 00:13:39.613 "is_configured": false, 00:13:39.613 "data_offset": 0, 00:13:39.613 "data_size": 65536 00:13:39.613 }, 00:13:39.613 { 00:13:39.613 "name": "BaseBdev3", 00:13:39.613 "uuid": "727bfc40-e7eb-46be-8eb2-d266bdefd395", 00:13:39.613 "is_configured": true, 00:13:39.613 "data_offset": 0, 00:13:39.613 "data_size": 65536 00:13:39.613 } 00:13:39.613 ] 00:13:39.613 }' 00:13:39.613 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.613 01:33:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.872 [2024-10-09 01:33:38.664697] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.872 "name": "Existed_Raid", 00:13:39.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.872 "strip_size_kb": 64, 00:13:39.872 "state": "configuring", 00:13:39.872 "raid_level": "raid5f", 00:13:39.872 "superblock": false, 00:13:39.872 "num_base_bdevs": 3, 00:13:39.872 "num_base_bdevs_discovered": 1, 00:13:39.872 "num_base_bdevs_operational": 3, 00:13:39.872 "base_bdevs_list": [ 00:13:39.872 { 00:13:39.872 "name": null, 00:13:39.872 "uuid": "4cde3603-447d-445d-a107-7e05c013f9c5", 00:13:39.872 "is_configured": false, 00:13:39.872 "data_offset": 0, 00:13:39.872 "data_size": 65536 00:13:39.872 }, 00:13:39.872 { 00:13:39.872 "name": null, 00:13:39.872 "uuid": "f31ea46c-67ec-4318-9bc4-64005ac4b375", 00:13:39.872 "is_configured": false, 00:13:39.872 "data_offset": 0, 00:13:39.872 "data_size": 65536 00:13:39.872 }, 00:13:39.872 { 00:13:39.872 "name": "BaseBdev3", 00:13:39.872 "uuid": "727bfc40-e7eb-46be-8eb2-d266bdefd395", 00:13:39.872 "is_configured": true, 00:13:39.872 "data_offset": 0, 00:13:39.872 "data_size": 65536 00:13:39.872 } 00:13:39.872 ] 00:13:39.872 }' 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.872 01:33:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.441 [2024-10-09 01:33:39.152920] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.441 "name": "Existed_Raid", 00:13:40.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.441 "strip_size_kb": 64, 00:13:40.441 "state": "configuring", 00:13:40.441 "raid_level": "raid5f", 00:13:40.441 "superblock": false, 00:13:40.441 "num_base_bdevs": 3, 00:13:40.441 "num_base_bdevs_discovered": 2, 00:13:40.441 "num_base_bdevs_operational": 3, 00:13:40.441 "base_bdevs_list": [ 00:13:40.441 { 00:13:40.441 "name": null, 00:13:40.441 "uuid": "4cde3603-447d-445d-a107-7e05c013f9c5", 00:13:40.441 "is_configured": false, 00:13:40.441 "data_offset": 0, 00:13:40.441 "data_size": 65536 00:13:40.441 }, 00:13:40.441 { 00:13:40.441 "name": "BaseBdev2", 00:13:40.441 "uuid": "f31ea46c-67ec-4318-9bc4-64005ac4b375", 00:13:40.441 "is_configured": true, 00:13:40.441 "data_offset": 0, 00:13:40.441 "data_size": 65536 00:13:40.441 }, 00:13:40.441 { 00:13:40.441 "name": "BaseBdev3", 00:13:40.441 "uuid": "727bfc40-e7eb-46be-8eb2-d266bdefd395", 00:13:40.441 "is_configured": true, 00:13:40.441 "data_offset": 0, 00:13:40.441 "data_size": 65536 00:13:40.441 } 00:13:40.441 ] 00:13:40.441 }' 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.441 01:33:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4cde3603-447d-445d-a107-7e05c013f9c5 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.010 [2024-10-09 01:33:39.713375] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:41.010 [2024-10-09 01:33:39.713431] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:41.010 [2024-10-09 01:33:39.713439] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:41.010 [2024-10-09 01:33:39.713810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:41.010 [2024-10-09 01:33:39.714327] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:41.010 [2024-10-09 01:33:39.714380] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:41.010 NewBaseBdev 00:13:41.010 [2024-10-09 01:33:39.714609] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs
-b NewBaseBdev -t 2000 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.010 [ 00:13:41.010 { 00:13:41.010 "name": "NewBaseBdev", 00:13:41.010 "aliases": [ 00:13:41.010 "4cde3603-447d-445d-a107-7e05c013f9c5" 00:13:41.010 ], 00:13:41.010 "product_name": "Malloc disk", 00:13:41.010 "block_size": 512, 00:13:41.010 "num_blocks": 65536, 00:13:41.010 "uuid": "4cde3603-447d-445d-a107-7e05c013f9c5", 00:13:41.010 "assigned_rate_limits": { 00:13:41.010 "rw_ios_per_sec": 0, 00:13:41.010 "rw_mbytes_per_sec": 0, 00:13:41.010 "r_mbytes_per_sec": 0, 00:13:41.010 "w_mbytes_per_sec": 0 00:13:41.010 }, 00:13:41.010 "claimed": true, 00:13:41.010 "claim_type": "exclusive_write", 00:13:41.010 "zoned": false, 00:13:41.010 "supported_io_types": { 00:13:41.010 "read": true, 00:13:41.010 "write": true, 00:13:41.010 "unmap": true, 00:13:41.010 "flush": true, 00:13:41.010 "reset": true, 00:13:41.010 "nvme_admin": false, 00:13:41.010 "nvme_io": false, 00:13:41.010 "nvme_io_md": false, 00:13:41.010 "write_zeroes": true, 00:13:41.010 "zcopy": true, 00:13:41.010 "get_zone_info": false, 00:13:41.010 "zone_management": false, 00:13:41.010 "zone_append": false, 00:13:41.010 "compare": false, 00:13:41.010 "compare_and_write": false, 00:13:41.010 "abort": true, 00:13:41.010 "seek_hole": false, 00:13:41.010 "seek_data": false, 00:13:41.010 "copy": true, 00:13:41.010 "nvme_iov_md": false 00:13:41.010 }, 00:13:41.010 "memory_domains": [ 00:13:41.010 { 00:13:41.010 "dma_device_id": "system", 00:13:41.010 "dma_device_type": 1 00:13:41.010 }, 00:13:41.010 { 00:13:41.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.010 "dma_device_type": 2 00:13:41.010 } 00:13:41.010 ], 00:13:41.010 "driver_specific": {} 00:13:41.010 } 00:13:41.010 ] 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.010 "name": 
"Existed_Raid", 00:13:41.010 "uuid": "aa58e3bd-58d6-460b-bd78-01f94b969710", 00:13:41.010 "strip_size_kb": 64, 00:13:41.010 "state": "online", 00:13:41.010 "raid_level": "raid5f", 00:13:41.010 "superblock": false, 00:13:41.010 "num_base_bdevs": 3, 00:13:41.010 "num_base_bdevs_discovered": 3, 00:13:41.010 "num_base_bdevs_operational": 3, 00:13:41.010 "base_bdevs_list": [ 00:13:41.010 { 00:13:41.010 "name": "NewBaseBdev", 00:13:41.010 "uuid": "4cde3603-447d-445d-a107-7e05c013f9c5", 00:13:41.010 "is_configured": true, 00:13:41.010 "data_offset": 0, 00:13:41.010 "data_size": 65536 00:13:41.010 }, 00:13:41.010 { 00:13:41.010 "name": "BaseBdev2", 00:13:41.010 "uuid": "f31ea46c-67ec-4318-9bc4-64005ac4b375", 00:13:41.010 "is_configured": true, 00:13:41.010 "data_offset": 0, 00:13:41.010 "data_size": 65536 00:13:41.010 }, 00:13:41.010 { 00:13:41.010 "name": "BaseBdev3", 00:13:41.010 "uuid": "727bfc40-e7eb-46be-8eb2-d266bdefd395", 00:13:41.010 "is_configured": true, 00:13:41.010 "data_offset": 0, 00:13:41.010 "data_size": 65536 00:13:41.010 } 00:13:41.010 ] 00:13:41.010 }' 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.010 01:33:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:41.579 
01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.579 [2024-10-09 01:33:40.193740] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:41.579 "name": "Existed_Raid", 00:13:41.579 "aliases": [ 00:13:41.579 "aa58e3bd-58d6-460b-bd78-01f94b969710" 00:13:41.579 ], 00:13:41.579 "product_name": "Raid Volume", 00:13:41.579 "block_size": 512, 00:13:41.579 "num_blocks": 131072, 00:13:41.579 "uuid": "aa58e3bd-58d6-460b-bd78-01f94b969710", 00:13:41.579 "assigned_rate_limits": { 00:13:41.579 "rw_ios_per_sec": 0, 00:13:41.579 "rw_mbytes_per_sec": 0, 00:13:41.579 "r_mbytes_per_sec": 0, 00:13:41.579 "w_mbytes_per_sec": 0 00:13:41.579 }, 00:13:41.579 "claimed": false, 00:13:41.579 "zoned": false, 00:13:41.579 "supported_io_types": { 00:13:41.579 "read": true, 00:13:41.579 "write": true, 00:13:41.579 "unmap": false, 00:13:41.579 "flush": false, 00:13:41.579 "reset": true, 00:13:41.579 "nvme_admin": false, 00:13:41.579 "nvme_io": false, 00:13:41.579 "nvme_io_md": false, 00:13:41.579 "write_zeroes": true, 00:13:41.579 "zcopy": false, 00:13:41.579 "get_zone_info": false, 00:13:41.579 "zone_management": false, 00:13:41.579 "zone_append": false, 00:13:41.579 "compare": false, 00:13:41.579 "compare_and_write": false, 00:13:41.579 "abort": false, 00:13:41.579 "seek_hole": false, 00:13:41.579 "seek_data": false, 00:13:41.579 "copy": false, 00:13:41.579 
"nvme_iov_md": false 00:13:41.579 }, 00:13:41.579 "driver_specific": { 00:13:41.579 "raid": { 00:13:41.579 "uuid": "aa58e3bd-58d6-460b-bd78-01f94b969710", 00:13:41.579 "strip_size_kb": 64, 00:13:41.579 "state": "online", 00:13:41.579 "raid_level": "raid5f", 00:13:41.579 "superblock": false, 00:13:41.579 "num_base_bdevs": 3, 00:13:41.579 "num_base_bdevs_discovered": 3, 00:13:41.579 "num_base_bdevs_operational": 3, 00:13:41.579 "base_bdevs_list": [ 00:13:41.579 { 00:13:41.579 "name": "NewBaseBdev", 00:13:41.579 "uuid": "4cde3603-447d-445d-a107-7e05c013f9c5", 00:13:41.579 "is_configured": true, 00:13:41.579 "data_offset": 0, 00:13:41.579 "data_size": 65536 00:13:41.579 }, 00:13:41.579 { 00:13:41.579 "name": "BaseBdev2", 00:13:41.579 "uuid": "f31ea46c-67ec-4318-9bc4-64005ac4b375", 00:13:41.579 "is_configured": true, 00:13:41.579 "data_offset": 0, 00:13:41.579 "data_size": 65536 00:13:41.579 }, 00:13:41.579 { 00:13:41.579 "name": "BaseBdev3", 00:13:41.579 "uuid": "727bfc40-e7eb-46be-8eb2-d266bdefd395", 00:13:41.579 "is_configured": true, 00:13:41.579 "data_offset": 0, 00:13:41.579 "data_size": 65536 00:13:41.579 } 00:13:41.579 ] 00:13:41.579 } 00:13:41.579 } 00:13:41.579 }' 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:41.579 BaseBdev2 00:13:41.579 BaseBdev3' 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 
-- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:41.579 01:33:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.579 01:33:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.579 [2024-10-09 01:33:40.449583] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:41.579 [2024-10-09 01:33:40.449611] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:41.579 [2024-10-09 01:33:40.449685] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:41.579 [2024-10-09 01:33:40.449970] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:41.580 [2024-10-09 01:33:40.449985] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:41.580 01:33:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.580 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 91595 00:13:41.580 01:33:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@950 -- # '[' -z 91595 ']' 00:13:41.580 01:33:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 91595 00:13:41.580 01:33:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:13:41.580 01:33:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:41.580 01:33:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91595 00:13:41.839 01:33:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:41.839 killing process with pid 91595 00:13:41.839 01:33:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:41.839 01:33:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91595' 00:13:41.839 01:33:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 91595 00:13:41.839 [2024-10-09 01:33:40.498742] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:41.839 01:33:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 91595 00:13:41.839 [2024-10-09 01:33:40.555641] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:42.100 01:33:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:42.100 00:13:42.100 real 0m9.140s 00:13:42.100 user 0m15.279s 00:13:42.100 sys 0m2.013s 00:13:42.100 01:33:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:42.100 ************************************ 00:13:42.100 END TEST raid5f_state_function_test 00:13:42.100 ************************************ 00:13:42.100 01:33:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.100 01:33:40 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test 
raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:13:42.100 01:33:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:42.100 01:33:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:42.100 01:33:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:42.360 ************************************ 00:13:42.360 START TEST raid5f_state_function_test_sb 00:13:42.360 ************************************ 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:42.360 01:33:41 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=92201 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 
92201' 00:13:42.360 Process raid pid: 92201 00:13:42.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 92201 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 92201 ']' 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:42.360 01:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.360 [2024-10-09 01:33:41.100948] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:13:42.360 [2024-10-09 01:33:41.101148] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.360 [2024-10-09 01:33:41.235181] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:42.620 [2024-10-09 01:33:41.264230] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.620 [2024-10-09 01:33:41.334218] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.620 [2024-10-09 01:33:41.410118] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:42.620 [2024-10-09 01:33:41.410162] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.189 01:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:43.189 01:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:43.189 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:43.189 01:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.189 01:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.189 [2024-10-09 01:33:41.918882] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:43.189 [2024-10-09 01:33:41.918942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:43.189 [2024-10-09 01:33:41.918958] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:43.189 [2024-10-09 01:33:41.918965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:43.189 [2024-10-09 01:33:41.918976] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:43.189 [2024-10-09 01:33:41.918983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:43.189 01:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.189 01:33:41 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:43.189 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.189 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.189 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:43.189 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.189 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.189 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.189 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.189 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.189 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.189 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.189 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.189 01:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.189 01:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.189 01:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.189 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.189 "name": "Existed_Raid", 00:13:43.189 "uuid": "586992f4-eb32-460e-8e5d-6b442190215b", 
00:13:43.189 "strip_size_kb": 64, 00:13:43.189 "state": "configuring", 00:13:43.189 "raid_level": "raid5f", 00:13:43.189 "superblock": true, 00:13:43.189 "num_base_bdevs": 3, 00:13:43.189 "num_base_bdevs_discovered": 0, 00:13:43.189 "num_base_bdevs_operational": 3, 00:13:43.189 "base_bdevs_list": [ 00:13:43.189 { 00:13:43.189 "name": "BaseBdev1", 00:13:43.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.189 "is_configured": false, 00:13:43.189 "data_offset": 0, 00:13:43.189 "data_size": 0 00:13:43.189 }, 00:13:43.189 { 00:13:43.189 "name": "BaseBdev2", 00:13:43.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.189 "is_configured": false, 00:13:43.189 "data_offset": 0, 00:13:43.189 "data_size": 0 00:13:43.189 }, 00:13:43.189 { 00:13:43.189 "name": "BaseBdev3", 00:13:43.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.189 "is_configured": false, 00:13:43.189 "data_offset": 0, 00:13:43.189 "data_size": 0 00:13:43.189 } 00:13:43.189 ] 00:13:43.189 }' 00:13:43.189 01:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.189 01:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.757 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:43.757 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.757 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.757 [2024-10-09 01:33:42.366898] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:43.757 [2024-10-09 01:33:42.366987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:43.757 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.757 01:33:42 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:43.757 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.757 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.757 [2024-10-09 01:33:42.378905] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:43.757 [2024-10-09 01:33:42.378978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:43.757 [2024-10-09 01:33:42.379006] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:43.757 [2024-10-09 01:33:42.379025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:43.757 [2024-10-09 01:33:42.379044] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:43.757 [2024-10-09 01:33:42.379061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:43.757 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.757 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:43.757 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.757 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.757 [2024-10-09 01:33:42.405857] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:43.757 BaseBdev1 00:13:43.757 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.757 01:33:42 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:43.757 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:43.757 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:43.757 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:43.757 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:43.757 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:43.757 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:43.757 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.757 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.757 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.757 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:43.757 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.757 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.757 [ 00:13:43.757 { 00:13:43.757 "name": "BaseBdev1", 00:13:43.757 "aliases": [ 00:13:43.757 "d894ba43-c834-4b10-bf95-171b54aa34e6" 00:13:43.757 ], 00:13:43.757 "product_name": "Malloc disk", 00:13:43.757 "block_size": 512, 00:13:43.757 "num_blocks": 65536, 00:13:43.757 "uuid": "d894ba43-c834-4b10-bf95-171b54aa34e6", 00:13:43.757 "assigned_rate_limits": { 00:13:43.757 "rw_ios_per_sec": 0, 00:13:43.757 "rw_mbytes_per_sec": 0, 00:13:43.757 "r_mbytes_per_sec": 0, 00:13:43.757 "w_mbytes_per_sec": 0 00:13:43.757 }, 
00:13:43.757 "claimed": true, 00:13:43.757 "claim_type": "exclusive_write", 00:13:43.757 "zoned": false, 00:13:43.757 "supported_io_types": { 00:13:43.757 "read": true, 00:13:43.757 "write": true, 00:13:43.757 "unmap": true, 00:13:43.757 "flush": true, 00:13:43.757 "reset": true, 00:13:43.757 "nvme_admin": false, 00:13:43.757 "nvme_io": false, 00:13:43.757 "nvme_io_md": false, 00:13:43.757 "write_zeroes": true, 00:13:43.757 "zcopy": true, 00:13:43.757 "get_zone_info": false, 00:13:43.757 "zone_management": false, 00:13:43.757 "zone_append": false, 00:13:43.757 "compare": false, 00:13:43.757 "compare_and_write": false, 00:13:43.757 "abort": true, 00:13:43.757 "seek_hole": false, 00:13:43.757 "seek_data": false, 00:13:43.757 "copy": true, 00:13:43.757 "nvme_iov_md": false 00:13:43.757 }, 00:13:43.758 "memory_domains": [ 00:13:43.758 { 00:13:43.758 "dma_device_id": "system", 00:13:43.758 "dma_device_type": 1 00:13:43.758 }, 00:13:43.758 { 00:13:43.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.758 "dma_device_type": 2 00:13:43.758 } 00:13:43.758 ], 00:13:43.758 "driver_specific": {} 00:13:43.758 } 00:13:43.758 ] 00:13:43.758 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.758 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:43.758 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:43.758 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.758 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.758 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:43.758 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:13:43.758 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.758 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.758 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.758 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.758 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.758 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.758 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.758 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.758 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.758 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.758 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.758 "name": "Existed_Raid", 00:13:43.758 "uuid": "557a88b7-692f-48b3-8434-fe94fbe24e6f", 00:13:43.758 "strip_size_kb": 64, 00:13:43.758 "state": "configuring", 00:13:43.758 "raid_level": "raid5f", 00:13:43.758 "superblock": true, 00:13:43.758 "num_base_bdevs": 3, 00:13:43.758 "num_base_bdevs_discovered": 1, 00:13:43.758 "num_base_bdevs_operational": 3, 00:13:43.758 "base_bdevs_list": [ 00:13:43.758 { 00:13:43.758 "name": "BaseBdev1", 00:13:43.758 "uuid": "d894ba43-c834-4b10-bf95-171b54aa34e6", 00:13:43.758 "is_configured": true, 00:13:43.758 "data_offset": 2048, 00:13:43.758 "data_size": 63488 00:13:43.758 }, 00:13:43.758 { 00:13:43.758 "name": "BaseBdev2", 00:13:43.758 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:43.758 "is_configured": false, 00:13:43.758 "data_offset": 0, 00:13:43.758 "data_size": 0 00:13:43.758 }, 00:13:43.758 { 00:13:43.758 "name": "BaseBdev3", 00:13:43.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.758 "is_configured": false, 00:13:43.758 "data_offset": 0, 00:13:43.758 "data_size": 0 00:13:43.758 } 00:13:43.758 ] 00:13:43.758 }' 00:13:43.758 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.758 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.017 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:44.017 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.017 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.017 [2024-10-09 01:33:42.829975] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:44.017 [2024-10-09 01:33:42.830030] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:44.017 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.017 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:44.017 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.017 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.017 [2024-10-09 01:33:42.841990] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:44.017 [2024-10-09 01:33:42.843999] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:13:44.017 [2024-10-09 01:33:42.844068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:44.017 [2024-10-09 01:33:42.844098] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:44.017 [2024-10-09 01:33:42.844117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:44.017 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.017 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:44.017 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:44.017 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:44.017 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.017 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.017 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:44.017 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.017 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.017 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.017 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.017 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.017 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.017 01:33:42 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.017 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.017 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.017 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.017 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.017 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.017 "name": "Existed_Raid", 00:13:44.017 "uuid": "079ff518-1eb2-4798-8199-094d2c9a31f6", 00:13:44.017 "strip_size_kb": 64, 00:13:44.017 "state": "configuring", 00:13:44.017 "raid_level": "raid5f", 00:13:44.017 "superblock": true, 00:13:44.017 "num_base_bdevs": 3, 00:13:44.017 "num_base_bdevs_discovered": 1, 00:13:44.017 "num_base_bdevs_operational": 3, 00:13:44.017 "base_bdevs_list": [ 00:13:44.017 { 00:13:44.017 "name": "BaseBdev1", 00:13:44.017 "uuid": "d894ba43-c834-4b10-bf95-171b54aa34e6", 00:13:44.017 "is_configured": true, 00:13:44.017 "data_offset": 2048, 00:13:44.017 "data_size": 63488 00:13:44.017 }, 00:13:44.017 { 00:13:44.017 "name": "BaseBdev2", 00:13:44.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.017 "is_configured": false, 00:13:44.017 "data_offset": 0, 00:13:44.017 "data_size": 0 00:13:44.017 }, 00:13:44.017 { 00:13:44.017 "name": "BaseBdev3", 00:13:44.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.017 "is_configured": false, 00:13:44.017 "data_offset": 0, 00:13:44.017 "data_size": 0 00:13:44.017 } 00:13:44.017 ] 00:13:44.018 }' 00:13:44.018 01:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.018 01:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.586 [2024-10-09 01:33:43.292889] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:44.586 BaseBdev2 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.586 [ 00:13:44.586 { 00:13:44.586 "name": "BaseBdev2", 00:13:44.586 "aliases": [ 00:13:44.586 "5f6683ff-c73d-4153-b5e2-dad302b97703" 00:13:44.586 ], 00:13:44.586 "product_name": "Malloc disk", 00:13:44.586 "block_size": 512, 00:13:44.586 "num_blocks": 65536, 00:13:44.586 "uuid": "5f6683ff-c73d-4153-b5e2-dad302b97703", 00:13:44.586 "assigned_rate_limits": { 00:13:44.586 "rw_ios_per_sec": 0, 00:13:44.586 "rw_mbytes_per_sec": 0, 00:13:44.586 "r_mbytes_per_sec": 0, 00:13:44.586 "w_mbytes_per_sec": 0 00:13:44.586 }, 00:13:44.586 "claimed": true, 00:13:44.586 "claim_type": "exclusive_write", 00:13:44.586 "zoned": false, 00:13:44.586 "supported_io_types": { 00:13:44.586 "read": true, 00:13:44.586 "write": true, 00:13:44.586 "unmap": true, 00:13:44.586 "flush": true, 00:13:44.586 "reset": true, 00:13:44.586 "nvme_admin": false, 00:13:44.586 "nvme_io": false, 00:13:44.586 "nvme_io_md": false, 00:13:44.586 "write_zeroes": true, 00:13:44.586 "zcopy": true, 00:13:44.586 "get_zone_info": false, 00:13:44.586 "zone_management": false, 00:13:44.586 "zone_append": false, 00:13:44.586 "compare": false, 00:13:44.586 "compare_and_write": false, 00:13:44.586 "abort": true, 00:13:44.586 "seek_hole": false, 00:13:44.586 "seek_data": false, 00:13:44.586 "copy": true, 00:13:44.586 "nvme_iov_md": false 00:13:44.586 }, 00:13:44.586 "memory_domains": [ 00:13:44.586 { 00:13:44.586 "dma_device_id": "system", 00:13:44.586 "dma_device_type": 1 00:13:44.586 }, 00:13:44.586 { 00:13:44.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.586 "dma_device_type": 2 00:13:44.586 } 00:13:44.586 ], 00:13:44.586 "driver_specific": {} 00:13:44.586 } 00:13:44.586 ] 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.586 01:33:43 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.586 "name": "Existed_Raid", 00:13:44.586 "uuid": "079ff518-1eb2-4798-8199-094d2c9a31f6", 00:13:44.586 "strip_size_kb": 64, 00:13:44.586 "state": "configuring", 00:13:44.586 "raid_level": "raid5f", 00:13:44.586 "superblock": true, 00:13:44.586 "num_base_bdevs": 3, 00:13:44.586 "num_base_bdevs_discovered": 2, 00:13:44.586 "num_base_bdevs_operational": 3, 00:13:44.586 "base_bdevs_list": [ 00:13:44.586 { 00:13:44.586 "name": "BaseBdev1", 00:13:44.586 "uuid": "d894ba43-c834-4b10-bf95-171b54aa34e6", 00:13:44.586 "is_configured": true, 00:13:44.586 "data_offset": 2048, 00:13:44.586 "data_size": 63488 00:13:44.586 }, 00:13:44.586 { 00:13:44.586 "name": "BaseBdev2", 00:13:44.586 "uuid": "5f6683ff-c73d-4153-b5e2-dad302b97703", 00:13:44.586 "is_configured": true, 00:13:44.586 "data_offset": 2048, 00:13:44.586 "data_size": 63488 00:13:44.586 }, 00:13:44.586 { 00:13:44.586 "name": "BaseBdev3", 00:13:44.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.586 "is_configured": false, 00:13:44.586 "data_offset": 0, 00:13:44.586 "data_size": 0 00:13:44.586 } 00:13:44.586 ] 00:13:44.586 }' 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.586 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.846 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:44.846 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.846 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.846 [2024-10-09 01:33:43.733696] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:13:44.846 [2024-10-09 01:33:43.733935] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:44.846 [2024-10-09 01:33:43.733953] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:44.846 [2024-10-09 01:33:43.734288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:44.846 BaseBdev3 00:13:44.846 [2024-10-09 01:33:43.734773] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:44.846 [2024-10-09 01:33:43.734802] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:44.846 [2024-10-09 01:33:43.734927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.846 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.846 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:44.846 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:44.846 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:44.846 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:44.846 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:44.846 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:44.846 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:44.846 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.105 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.105 01:33:43 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.105 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:45.105 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.105 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.105 [ 00:13:45.105 { 00:13:45.105 "name": "BaseBdev3", 00:13:45.105 "aliases": [ 00:13:45.105 "8a4c5bdd-5997-4df3-b808-9e12862bc9c4" 00:13:45.105 ], 00:13:45.105 "product_name": "Malloc disk", 00:13:45.105 "block_size": 512, 00:13:45.105 "num_blocks": 65536, 00:13:45.105 "uuid": "8a4c5bdd-5997-4df3-b808-9e12862bc9c4", 00:13:45.105 "assigned_rate_limits": { 00:13:45.105 "rw_ios_per_sec": 0, 00:13:45.105 "rw_mbytes_per_sec": 0, 00:13:45.105 "r_mbytes_per_sec": 0, 00:13:45.105 "w_mbytes_per_sec": 0 00:13:45.105 }, 00:13:45.105 "claimed": true, 00:13:45.105 "claim_type": "exclusive_write", 00:13:45.105 "zoned": false, 00:13:45.105 "supported_io_types": { 00:13:45.105 "read": true, 00:13:45.105 "write": true, 00:13:45.105 "unmap": true, 00:13:45.105 "flush": true, 00:13:45.105 "reset": true, 00:13:45.105 "nvme_admin": false, 00:13:45.105 "nvme_io": false, 00:13:45.105 "nvme_io_md": false, 00:13:45.105 "write_zeroes": true, 00:13:45.105 "zcopy": true, 00:13:45.105 "get_zone_info": false, 00:13:45.105 "zone_management": false, 00:13:45.105 "zone_append": false, 00:13:45.105 "compare": false, 00:13:45.105 "compare_and_write": false, 00:13:45.105 "abort": true, 00:13:45.105 "seek_hole": false, 00:13:45.105 "seek_data": false, 00:13:45.105 "copy": true, 00:13:45.105 "nvme_iov_md": false 00:13:45.105 }, 00:13:45.105 "memory_domains": [ 00:13:45.105 { 00:13:45.105 "dma_device_id": "system", 00:13:45.105 "dma_device_type": 1 00:13:45.105 }, 00:13:45.105 { 00:13:45.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.105 
"dma_device_type": 2 00:13:45.105 } 00:13:45.105 ], 00:13:45.105 "driver_specific": {} 00:13:45.105 } 00:13:45.105 ] 00:13:45.105 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.105 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:45.105 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:45.105 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:45.105 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:45.105 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.105 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.105 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:45.105 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.105 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.105 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.105 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.105 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.105 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.105 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.105 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:13:45.105 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.105 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.105 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.105 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.105 "name": "Existed_Raid", 00:13:45.105 "uuid": "079ff518-1eb2-4798-8199-094d2c9a31f6", 00:13:45.105 "strip_size_kb": 64, 00:13:45.105 "state": "online", 00:13:45.105 "raid_level": "raid5f", 00:13:45.105 "superblock": true, 00:13:45.105 "num_base_bdevs": 3, 00:13:45.105 "num_base_bdevs_discovered": 3, 00:13:45.105 "num_base_bdevs_operational": 3, 00:13:45.105 "base_bdevs_list": [ 00:13:45.105 { 00:13:45.105 "name": "BaseBdev1", 00:13:45.105 "uuid": "d894ba43-c834-4b10-bf95-171b54aa34e6", 00:13:45.105 "is_configured": true, 00:13:45.105 "data_offset": 2048, 00:13:45.105 "data_size": 63488 00:13:45.105 }, 00:13:45.105 { 00:13:45.105 "name": "BaseBdev2", 00:13:45.105 "uuid": "5f6683ff-c73d-4153-b5e2-dad302b97703", 00:13:45.105 "is_configured": true, 00:13:45.105 "data_offset": 2048, 00:13:45.105 "data_size": 63488 00:13:45.105 }, 00:13:45.105 { 00:13:45.105 "name": "BaseBdev3", 00:13:45.105 "uuid": "8a4c5bdd-5997-4df3-b808-9e12862bc9c4", 00:13:45.105 "is_configured": true, 00:13:45.105 "data_offset": 2048, 00:13:45.105 "data_size": 63488 00:13:45.105 } 00:13:45.105 ] 00:13:45.105 }' 00:13:45.105 01:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.105 01:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.364 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:45.364 01:33:44 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:45.364 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:45.364 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:45.364 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:45.364 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:45.364 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:45.364 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:45.364 01:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.364 01:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.364 [2024-10-09 01:33:44.230034] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:45.364 01:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.623 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:45.623 "name": "Existed_Raid", 00:13:45.623 "aliases": [ 00:13:45.623 "079ff518-1eb2-4798-8199-094d2c9a31f6" 00:13:45.623 ], 00:13:45.623 "product_name": "Raid Volume", 00:13:45.623 "block_size": 512, 00:13:45.623 "num_blocks": 126976, 00:13:45.623 "uuid": "079ff518-1eb2-4798-8199-094d2c9a31f6", 00:13:45.623 "assigned_rate_limits": { 00:13:45.623 "rw_ios_per_sec": 0, 00:13:45.623 "rw_mbytes_per_sec": 0, 00:13:45.623 "r_mbytes_per_sec": 0, 00:13:45.623 "w_mbytes_per_sec": 0 00:13:45.623 }, 00:13:45.623 "claimed": false, 00:13:45.623 "zoned": false, 00:13:45.623 "supported_io_types": { 00:13:45.623 "read": true, 00:13:45.623 "write": true, 00:13:45.623 "unmap": false, 
00:13:45.623 "flush": false, 00:13:45.623 "reset": true, 00:13:45.623 "nvme_admin": false, 00:13:45.623 "nvme_io": false, 00:13:45.623 "nvme_io_md": false, 00:13:45.623 "write_zeroes": true, 00:13:45.623 "zcopy": false, 00:13:45.623 "get_zone_info": false, 00:13:45.623 "zone_management": false, 00:13:45.623 "zone_append": false, 00:13:45.623 "compare": false, 00:13:45.623 "compare_and_write": false, 00:13:45.623 "abort": false, 00:13:45.623 "seek_hole": false, 00:13:45.623 "seek_data": false, 00:13:45.623 "copy": false, 00:13:45.623 "nvme_iov_md": false 00:13:45.623 }, 00:13:45.623 "driver_specific": { 00:13:45.623 "raid": { 00:13:45.623 "uuid": "079ff518-1eb2-4798-8199-094d2c9a31f6", 00:13:45.623 "strip_size_kb": 64, 00:13:45.623 "state": "online", 00:13:45.623 "raid_level": "raid5f", 00:13:45.623 "superblock": true, 00:13:45.623 "num_base_bdevs": 3, 00:13:45.623 "num_base_bdevs_discovered": 3, 00:13:45.623 "num_base_bdevs_operational": 3, 00:13:45.623 "base_bdevs_list": [ 00:13:45.623 { 00:13:45.623 "name": "BaseBdev1", 00:13:45.623 "uuid": "d894ba43-c834-4b10-bf95-171b54aa34e6", 00:13:45.623 "is_configured": true, 00:13:45.623 "data_offset": 2048, 00:13:45.623 "data_size": 63488 00:13:45.623 }, 00:13:45.623 { 00:13:45.623 "name": "BaseBdev2", 00:13:45.623 "uuid": "5f6683ff-c73d-4153-b5e2-dad302b97703", 00:13:45.623 "is_configured": true, 00:13:45.623 "data_offset": 2048, 00:13:45.623 "data_size": 63488 00:13:45.623 }, 00:13:45.623 { 00:13:45.623 "name": "BaseBdev3", 00:13:45.623 "uuid": "8a4c5bdd-5997-4df3-b808-9e12862bc9c4", 00:13:45.623 "is_configured": true, 00:13:45.623 "data_offset": 2048, 00:13:45.623 "data_size": 63488 00:13:45.623 } 00:13:45.623 ] 00:13:45.623 } 00:13:45.623 } 00:13:45.623 }' 00:13:45.623 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:45.623 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 
-- # base_bdev_names='BaseBdev1 00:13:45.623 BaseBdev2 00:13:45.623 BaseBdev3' 00:13:45.623 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:45.623 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:45.623 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:45.623 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:45.623 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:45.623 01:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.623 01:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.623 01:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.623 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:45.623 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:45.623 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:45.623 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:45.623 01:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.623 01:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.623 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:45.623 01:33:44 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.623 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:45.623 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:45.623 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:45.623 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:45.623 01:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.623 01:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.623 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:45.623 01:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.883 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:45.883 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:45.883 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:45.883 01:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.883 01:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.883 [2024-10-09 01:33:44.526010] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:45.883 01:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.883 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:45.883 
01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:45.883 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:45.883 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:45.883 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:45.883 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:45.883 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.883 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.883 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:45.883 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.883 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:45.883 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.883 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.884 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.884 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.884 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.884 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.884 01:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:45.884 01:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.884 01:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.884 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.884 "name": "Existed_Raid", 00:13:45.884 "uuid": "079ff518-1eb2-4798-8199-094d2c9a31f6", 00:13:45.884 "strip_size_kb": 64, 00:13:45.884 "state": "online", 00:13:45.884 "raid_level": "raid5f", 00:13:45.884 "superblock": true, 00:13:45.884 "num_base_bdevs": 3, 00:13:45.884 "num_base_bdevs_discovered": 2, 00:13:45.884 "num_base_bdevs_operational": 2, 00:13:45.884 "base_bdevs_list": [ 00:13:45.884 { 00:13:45.884 "name": null, 00:13:45.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.884 "is_configured": false, 00:13:45.884 "data_offset": 0, 00:13:45.884 "data_size": 63488 00:13:45.884 }, 00:13:45.884 { 00:13:45.884 "name": "BaseBdev2", 00:13:45.884 "uuid": "5f6683ff-c73d-4153-b5e2-dad302b97703", 00:13:45.884 "is_configured": true, 00:13:45.884 "data_offset": 2048, 00:13:45.884 "data_size": 63488 00:13:45.884 }, 00:13:45.884 { 00:13:45.884 "name": "BaseBdev3", 00:13:45.884 "uuid": "8a4c5bdd-5997-4df3-b808-9e12862bc9c4", 00:13:45.884 "is_configured": true, 00:13:45.884 "data_offset": 2048, 00:13:45.884 "data_size": 63488 00:13:45.884 } 00:13:45.884 ] 00:13:45.884 }' 00:13:45.884 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.884 01:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.143 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:46.143 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:46.143 01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.143 
01:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:46.143 01:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.143 01:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.143 01:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.143 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:46.143 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:46.143 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:46.143 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.143 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.143 [2024-10-09 01:33:45.010712] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:46.143 [2024-10-09 01:33:45.010857] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:46.143 [2024-10-09 01:33:45.031029] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:46.143 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.143 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:46.143 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:46.403 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.403 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:46.403 01:33:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.403 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.403 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.403 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:46.403 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:46.403 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.404 [2024-10-09 01:33:45.091062] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:46.404 [2024-10-09 01:33:45.091122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.404 01:33:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.404 BaseBdev2 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:46.404 01:33:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.404 [ 00:13:46.404 { 00:13:46.404 "name": "BaseBdev2", 00:13:46.404 "aliases": [ 00:13:46.404 "5f977dee-4552-45e6-b281-e56e7db32b7a" 00:13:46.404 ], 00:13:46.404 "product_name": "Malloc disk", 00:13:46.404 "block_size": 512, 00:13:46.404 "num_blocks": 65536, 00:13:46.404 "uuid": "5f977dee-4552-45e6-b281-e56e7db32b7a", 00:13:46.404 "assigned_rate_limits": { 00:13:46.404 "rw_ios_per_sec": 0, 00:13:46.404 "rw_mbytes_per_sec": 0, 00:13:46.404 "r_mbytes_per_sec": 0, 00:13:46.404 "w_mbytes_per_sec": 0 00:13:46.404 }, 00:13:46.404 "claimed": false, 00:13:46.404 "zoned": false, 00:13:46.404 "supported_io_types": { 00:13:46.404 "read": true, 00:13:46.404 "write": true, 00:13:46.404 "unmap": true, 00:13:46.404 "flush": true, 00:13:46.404 "reset": true, 00:13:46.404 "nvme_admin": false, 00:13:46.404 "nvme_io": false, 00:13:46.404 "nvme_io_md": false, 00:13:46.404 "write_zeroes": true, 00:13:46.404 "zcopy": true, 00:13:46.404 "get_zone_info": false, 00:13:46.404 "zone_management": false, 00:13:46.404 "zone_append": false, 00:13:46.404 "compare": false, 00:13:46.404 "compare_and_write": false, 00:13:46.404 "abort": true, 00:13:46.404 "seek_hole": false, 00:13:46.404 "seek_data": false, 00:13:46.404 "copy": true, 00:13:46.404 "nvme_iov_md": false 00:13:46.404 }, 00:13:46.404 "memory_domains": [ 
00:13:46.404 { 00:13:46.404 "dma_device_id": "system", 00:13:46.404 "dma_device_type": 1 00:13:46.404 }, 00:13:46.404 { 00:13:46.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.404 "dma_device_type": 2 00:13:46.404 } 00:13:46.404 ], 00:13:46.404 "driver_specific": {} 00:13:46.404 } 00:13:46.404 ] 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.404 BaseBdev3 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:46.404 01:33:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.404 [ 00:13:46.404 { 00:13:46.404 "name": "BaseBdev3", 00:13:46.404 "aliases": [ 00:13:46.404 "914cbf39-b437-47c3-8142-6e9c8cd30791" 00:13:46.404 ], 00:13:46.404 "product_name": "Malloc disk", 00:13:46.404 "block_size": 512, 00:13:46.404 "num_blocks": 65536, 00:13:46.404 "uuid": "914cbf39-b437-47c3-8142-6e9c8cd30791", 00:13:46.404 "assigned_rate_limits": { 00:13:46.404 "rw_ios_per_sec": 0, 00:13:46.404 "rw_mbytes_per_sec": 0, 00:13:46.404 "r_mbytes_per_sec": 0, 00:13:46.404 "w_mbytes_per_sec": 0 00:13:46.404 }, 00:13:46.404 "claimed": false, 00:13:46.404 "zoned": false, 00:13:46.404 "supported_io_types": { 00:13:46.404 "read": true, 00:13:46.404 "write": true, 00:13:46.404 "unmap": true, 00:13:46.404 "flush": true, 00:13:46.404 "reset": true, 00:13:46.404 "nvme_admin": false, 00:13:46.404 "nvme_io": false, 00:13:46.404 "nvme_io_md": false, 00:13:46.404 "write_zeroes": true, 00:13:46.404 "zcopy": true, 00:13:46.404 "get_zone_info": false, 00:13:46.404 "zone_management": false, 00:13:46.404 "zone_append": false, 00:13:46.404 "compare": false, 00:13:46.404 "compare_and_write": false, 00:13:46.404 "abort": true, 00:13:46.404 "seek_hole": false, 00:13:46.404 
"seek_data": false, 00:13:46.404 "copy": true, 00:13:46.404 "nvme_iov_md": false 00:13:46.404 }, 00:13:46.404 "memory_domains": [ 00:13:46.404 { 00:13:46.404 "dma_device_id": "system", 00:13:46.404 "dma_device_type": 1 00:13:46.404 }, 00:13:46.404 { 00:13:46.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.404 "dma_device_type": 2 00:13:46.404 } 00:13:46.404 ], 00:13:46.404 "driver_specific": {} 00:13:46.404 } 00:13:46.404 ] 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.404 [2024-10-09 01:33:45.283911] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:46.404 [2024-10-09 01:33:45.284032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:46.404 [2024-10-09 01:33:45.284071] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:46.404 [2024-10-09 01:33:45.286180] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.404 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.405 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.405 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.405 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.664 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.664 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.664 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.664 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.664 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.664 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.664 "name": "Existed_Raid", 00:13:46.664 "uuid": "30df5246-9ee9-42ac-a0ea-cab58ab3f3dc", 00:13:46.664 "strip_size_kb": 64, 00:13:46.664 
"state": "configuring", 00:13:46.664 "raid_level": "raid5f", 00:13:46.664 "superblock": true, 00:13:46.664 "num_base_bdevs": 3, 00:13:46.664 "num_base_bdevs_discovered": 2, 00:13:46.664 "num_base_bdevs_operational": 3, 00:13:46.664 "base_bdevs_list": [ 00:13:46.664 { 00:13:46.664 "name": "BaseBdev1", 00:13:46.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.664 "is_configured": false, 00:13:46.664 "data_offset": 0, 00:13:46.664 "data_size": 0 00:13:46.664 }, 00:13:46.664 { 00:13:46.664 "name": "BaseBdev2", 00:13:46.664 "uuid": "5f977dee-4552-45e6-b281-e56e7db32b7a", 00:13:46.664 "is_configured": true, 00:13:46.664 "data_offset": 2048, 00:13:46.664 "data_size": 63488 00:13:46.664 }, 00:13:46.664 { 00:13:46.664 "name": "BaseBdev3", 00:13:46.664 "uuid": "914cbf39-b437-47c3-8142-6e9c8cd30791", 00:13:46.664 "is_configured": true, 00:13:46.664 "data_offset": 2048, 00:13:46.664 "data_size": 63488 00:13:46.664 } 00:13:46.664 ] 00:13:46.664 }' 00:13:46.664 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.664 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.924 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:46.924 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.924 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.924 [2024-10-09 01:33:45.723987] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:46.924 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.924 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:46.924 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:13:46.924 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.924 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:46.924 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.924 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.924 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.924 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.924 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.924 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.924 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.924 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.924 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.924 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.924 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.924 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.924 "name": "Existed_Raid", 00:13:46.924 "uuid": "30df5246-9ee9-42ac-a0ea-cab58ab3f3dc", 00:13:46.924 "strip_size_kb": 64, 00:13:46.924 "state": "configuring", 00:13:46.924 "raid_level": "raid5f", 00:13:46.924 "superblock": true, 00:13:46.924 "num_base_bdevs": 3, 00:13:46.924 "num_base_bdevs_discovered": 1, 
00:13:46.924 "num_base_bdevs_operational": 3, 00:13:46.924 "base_bdevs_list": [ 00:13:46.924 { 00:13:46.924 "name": "BaseBdev1", 00:13:46.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.924 "is_configured": false, 00:13:46.924 "data_offset": 0, 00:13:46.924 "data_size": 0 00:13:46.924 }, 00:13:46.924 { 00:13:46.924 "name": null, 00:13:46.924 "uuid": "5f977dee-4552-45e6-b281-e56e7db32b7a", 00:13:46.924 "is_configured": false, 00:13:46.924 "data_offset": 0, 00:13:46.924 "data_size": 63488 00:13:46.924 }, 00:13:46.924 { 00:13:46.924 "name": "BaseBdev3", 00:13:46.924 "uuid": "914cbf39-b437-47c3-8142-6e9c8cd30791", 00:13:46.924 "is_configured": true, 00:13:46.924 "data_offset": 2048, 00:13:46.924 "data_size": 63488 00:13:46.924 } 00:13:46.924 ] 00:13:46.924 }' 00:13:46.924 01:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.924 01:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.511 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:47.511 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.511 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.511 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.511 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.511 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:47.511 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:47.511 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.511 01:33:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.511 [2024-10-09 01:33:46.232813] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:47.511 BaseBdev1 00:13:47.511 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.511 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:47.511 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.512 [ 00:13:47.512 { 00:13:47.512 "name": "BaseBdev1", 00:13:47.512 "aliases": [ 00:13:47.512 
"84b207ed-9b16-4b9d-be46-3852f604bad3" 00:13:47.512 ], 00:13:47.512 "product_name": "Malloc disk", 00:13:47.512 "block_size": 512, 00:13:47.512 "num_blocks": 65536, 00:13:47.512 "uuid": "84b207ed-9b16-4b9d-be46-3852f604bad3", 00:13:47.512 "assigned_rate_limits": { 00:13:47.512 "rw_ios_per_sec": 0, 00:13:47.512 "rw_mbytes_per_sec": 0, 00:13:47.512 "r_mbytes_per_sec": 0, 00:13:47.512 "w_mbytes_per_sec": 0 00:13:47.512 }, 00:13:47.512 "claimed": true, 00:13:47.512 "claim_type": "exclusive_write", 00:13:47.512 "zoned": false, 00:13:47.512 "supported_io_types": { 00:13:47.512 "read": true, 00:13:47.512 "write": true, 00:13:47.512 "unmap": true, 00:13:47.512 "flush": true, 00:13:47.512 "reset": true, 00:13:47.512 "nvme_admin": false, 00:13:47.512 "nvme_io": false, 00:13:47.512 "nvme_io_md": false, 00:13:47.512 "write_zeroes": true, 00:13:47.512 "zcopy": true, 00:13:47.512 "get_zone_info": false, 00:13:47.512 "zone_management": false, 00:13:47.512 "zone_append": false, 00:13:47.512 "compare": false, 00:13:47.512 "compare_and_write": false, 00:13:47.512 "abort": true, 00:13:47.512 "seek_hole": false, 00:13:47.512 "seek_data": false, 00:13:47.512 "copy": true, 00:13:47.512 "nvme_iov_md": false 00:13:47.512 }, 00:13:47.512 "memory_domains": [ 00:13:47.512 { 00:13:47.512 "dma_device_id": "system", 00:13:47.512 "dma_device_type": 1 00:13:47.512 }, 00:13:47.512 { 00:13:47.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.512 "dma_device_type": 2 00:13:47.512 } 00:13:47.512 ], 00:13:47.512 "driver_specific": {} 00:13:47.512 } 00:13:47.512 ] 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.512 "name": "Existed_Raid", 00:13:47.512 "uuid": "30df5246-9ee9-42ac-a0ea-cab58ab3f3dc", 00:13:47.512 "strip_size_kb": 64, 00:13:47.512 "state": "configuring", 00:13:47.512 "raid_level": "raid5f", 00:13:47.512 "superblock": true, 00:13:47.512 "num_base_bdevs": 3, 00:13:47.512 
"num_base_bdevs_discovered": 2, 00:13:47.512 "num_base_bdevs_operational": 3, 00:13:47.512 "base_bdevs_list": [ 00:13:47.512 { 00:13:47.512 "name": "BaseBdev1", 00:13:47.512 "uuid": "84b207ed-9b16-4b9d-be46-3852f604bad3", 00:13:47.512 "is_configured": true, 00:13:47.512 "data_offset": 2048, 00:13:47.512 "data_size": 63488 00:13:47.512 }, 00:13:47.512 { 00:13:47.512 "name": null, 00:13:47.512 "uuid": "5f977dee-4552-45e6-b281-e56e7db32b7a", 00:13:47.512 "is_configured": false, 00:13:47.512 "data_offset": 0, 00:13:47.512 "data_size": 63488 00:13:47.512 }, 00:13:47.512 { 00:13:47.512 "name": "BaseBdev3", 00:13:47.512 "uuid": "914cbf39-b437-47c3-8142-6e9c8cd30791", 00:13:47.512 "is_configured": true, 00:13:47.512 "data_offset": 2048, 00:13:47.512 "data_size": 63488 00:13:47.512 } 00:13:47.512 ] 00:13:47.512 }' 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.512 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.080 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.081 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:48.081 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.081 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.081 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.081 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:48.081 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:48.081 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:48.081 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.081 [2024-10-09 01:33:46.748987] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:48.081 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.081 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:48.081 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.081 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.081 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:48.081 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.081 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:48.081 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.081 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.081 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.081 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.081 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.081 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.081 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.081 01:33:46 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:48.081 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.081 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.081 "name": "Existed_Raid", 00:13:48.081 "uuid": "30df5246-9ee9-42ac-a0ea-cab58ab3f3dc", 00:13:48.081 "strip_size_kb": 64, 00:13:48.081 "state": "configuring", 00:13:48.081 "raid_level": "raid5f", 00:13:48.081 "superblock": true, 00:13:48.081 "num_base_bdevs": 3, 00:13:48.081 "num_base_bdevs_discovered": 1, 00:13:48.081 "num_base_bdevs_operational": 3, 00:13:48.081 "base_bdevs_list": [ 00:13:48.081 { 00:13:48.081 "name": "BaseBdev1", 00:13:48.081 "uuid": "84b207ed-9b16-4b9d-be46-3852f604bad3", 00:13:48.081 "is_configured": true, 00:13:48.081 "data_offset": 2048, 00:13:48.081 "data_size": 63488 00:13:48.081 }, 00:13:48.081 { 00:13:48.081 "name": null, 00:13:48.081 "uuid": "5f977dee-4552-45e6-b281-e56e7db32b7a", 00:13:48.081 "is_configured": false, 00:13:48.081 "data_offset": 0, 00:13:48.081 "data_size": 63488 00:13:48.081 }, 00:13:48.081 { 00:13:48.081 "name": null, 00:13:48.081 "uuid": "914cbf39-b437-47c3-8142-6e9c8cd30791", 00:13:48.081 "is_configured": false, 00:13:48.081 "data_offset": 0, 00:13:48.081 "data_size": 63488 00:13:48.081 } 00:13:48.081 ] 00:13:48.081 }' 00:13:48.081 01:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.081 01:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.340 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:48.340 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.340 01:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.340 01:33:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.599 01:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.599 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:48.599 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:48.599 01:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.599 01:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.599 [2024-10-09 01:33:47.245128] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:48.599 01:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.599 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:48.599 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.599 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.599 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:48.599 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.599 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:48.599 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.599 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.599 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:13:48.599 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.599 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.599 01:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.599 01:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.599 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.599 01:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.599 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.599 "name": "Existed_Raid", 00:13:48.599 "uuid": "30df5246-9ee9-42ac-a0ea-cab58ab3f3dc", 00:13:48.599 "strip_size_kb": 64, 00:13:48.599 "state": "configuring", 00:13:48.599 "raid_level": "raid5f", 00:13:48.599 "superblock": true, 00:13:48.599 "num_base_bdevs": 3, 00:13:48.599 "num_base_bdevs_discovered": 2, 00:13:48.599 "num_base_bdevs_operational": 3, 00:13:48.599 "base_bdevs_list": [ 00:13:48.599 { 00:13:48.599 "name": "BaseBdev1", 00:13:48.599 "uuid": "84b207ed-9b16-4b9d-be46-3852f604bad3", 00:13:48.599 "is_configured": true, 00:13:48.599 "data_offset": 2048, 00:13:48.599 "data_size": 63488 00:13:48.599 }, 00:13:48.599 { 00:13:48.599 "name": null, 00:13:48.599 "uuid": "5f977dee-4552-45e6-b281-e56e7db32b7a", 00:13:48.599 "is_configured": false, 00:13:48.599 "data_offset": 0, 00:13:48.599 "data_size": 63488 00:13:48.599 }, 00:13:48.599 { 00:13:48.599 "name": "BaseBdev3", 00:13:48.599 "uuid": "914cbf39-b437-47c3-8142-6e9c8cd30791", 00:13:48.599 "is_configured": true, 00:13:48.599 "data_offset": 2048, 00:13:48.599 "data_size": 63488 00:13:48.599 } 00:13:48.599 ] 00:13:48.599 }' 00:13:48.599 01:33:47 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.599 01:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.859 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:48.859 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.859 01:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.859 01:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.859 01:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.859 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:48.859 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:48.859 01:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.859 01:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.859 [2024-10-09 01:33:47.729303] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:49.117 01:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.117 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:49.117 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.117 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.117 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 
00:13:49.117 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.117 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:49.117 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.118 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.118 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.118 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.118 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.118 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.118 01:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.118 01:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.118 01:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.118 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.118 "name": "Existed_Raid", 00:13:49.118 "uuid": "30df5246-9ee9-42ac-a0ea-cab58ab3f3dc", 00:13:49.118 "strip_size_kb": 64, 00:13:49.118 "state": "configuring", 00:13:49.118 "raid_level": "raid5f", 00:13:49.118 "superblock": true, 00:13:49.118 "num_base_bdevs": 3, 00:13:49.118 "num_base_bdevs_discovered": 1, 00:13:49.118 "num_base_bdevs_operational": 3, 00:13:49.118 "base_bdevs_list": [ 00:13:49.118 { 00:13:49.118 "name": null, 00:13:49.118 "uuid": "84b207ed-9b16-4b9d-be46-3852f604bad3", 00:13:49.118 "is_configured": false, 00:13:49.118 "data_offset": 0, 00:13:49.118 "data_size": 63488 
00:13:49.118 }, 00:13:49.118 { 00:13:49.118 "name": null, 00:13:49.118 "uuid": "5f977dee-4552-45e6-b281-e56e7db32b7a", 00:13:49.118 "is_configured": false, 00:13:49.118 "data_offset": 0, 00:13:49.118 "data_size": 63488 00:13:49.118 }, 00:13:49.118 { 00:13:49.118 "name": "BaseBdev3", 00:13:49.118 "uuid": "914cbf39-b437-47c3-8142-6e9c8cd30791", 00:13:49.118 "is_configured": true, 00:13:49.118 "data_offset": 2048, 00:13:49.118 "data_size": 63488 00:13:49.118 } 00:13:49.118 ] 00:13:49.118 }' 00:13:49.118 01:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.118 01:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.377 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.377 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:49.377 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.377 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.377 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.377 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:49.377 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:49.377 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.377 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.635 [2024-10-09 01:33:48.269759] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:49.635 01:33:48 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.635 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:49.635 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.635 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.635 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:49.635 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.635 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:49.635 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.635 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.635 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.635 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.635 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.635 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.635 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.635 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.635 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.635 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.635 "name": 
"Existed_Raid", 00:13:49.635 "uuid": "30df5246-9ee9-42ac-a0ea-cab58ab3f3dc", 00:13:49.635 "strip_size_kb": 64, 00:13:49.635 "state": "configuring", 00:13:49.635 "raid_level": "raid5f", 00:13:49.635 "superblock": true, 00:13:49.635 "num_base_bdevs": 3, 00:13:49.635 "num_base_bdevs_discovered": 2, 00:13:49.635 "num_base_bdevs_operational": 3, 00:13:49.635 "base_bdevs_list": [ 00:13:49.635 { 00:13:49.635 "name": null, 00:13:49.635 "uuid": "84b207ed-9b16-4b9d-be46-3852f604bad3", 00:13:49.635 "is_configured": false, 00:13:49.635 "data_offset": 0, 00:13:49.635 "data_size": 63488 00:13:49.635 }, 00:13:49.636 { 00:13:49.636 "name": "BaseBdev2", 00:13:49.636 "uuid": "5f977dee-4552-45e6-b281-e56e7db32b7a", 00:13:49.636 "is_configured": true, 00:13:49.636 "data_offset": 2048, 00:13:49.636 "data_size": 63488 00:13:49.636 }, 00:13:49.636 { 00:13:49.636 "name": "BaseBdev3", 00:13:49.636 "uuid": "914cbf39-b437-47c3-8142-6e9c8cd30791", 00:13:49.636 "is_configured": true, 00:13:49.636 "data_offset": 2048, 00:13:49.636 "data_size": 63488 00:13:49.636 } 00:13:49.636 ] 00:13:49.636 }' 00:13:49.636 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.636 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.895 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.895 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:49.895 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.895 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.895 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.895 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e 
]] 00:13:49.895 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:49.895 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.895 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.895 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.895 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.895 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 84b207ed-9b16-4b9d-be46-3852f604bad3 00:13:49.895 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.895 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.155 [2024-10-09 01:33:48.794578] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:50.155 [2024-10-09 01:33:48.794766] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:50.155 [2024-10-09 01:33:48.794779] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:50.155 [2024-10-09 01:33:48.795047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:50.155 NewBaseBdev 00:13:50.155 [2024-10-09 01:33:48.795483] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:50.155 [2024-10-09 01:33:48.795500] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:50.155 [2024-10-09 01:33:48.795628] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.155 [ 00:13:50.155 { 00:13:50.155 "name": "NewBaseBdev", 00:13:50.155 "aliases": [ 00:13:50.155 "84b207ed-9b16-4b9d-be46-3852f604bad3" 00:13:50.155 ], 00:13:50.155 "product_name": "Malloc disk", 00:13:50.155 "block_size": 512, 00:13:50.155 "num_blocks": 65536, 00:13:50.155 "uuid": "84b207ed-9b16-4b9d-be46-3852f604bad3", 00:13:50.155 "assigned_rate_limits": { 00:13:50.155 "rw_ios_per_sec": 0, 00:13:50.155 
"rw_mbytes_per_sec": 0, 00:13:50.155 "r_mbytes_per_sec": 0, 00:13:50.155 "w_mbytes_per_sec": 0 00:13:50.155 }, 00:13:50.155 "claimed": true, 00:13:50.155 "claim_type": "exclusive_write", 00:13:50.155 "zoned": false, 00:13:50.155 "supported_io_types": { 00:13:50.155 "read": true, 00:13:50.155 "write": true, 00:13:50.155 "unmap": true, 00:13:50.155 "flush": true, 00:13:50.155 "reset": true, 00:13:50.155 "nvme_admin": false, 00:13:50.155 "nvme_io": false, 00:13:50.155 "nvme_io_md": false, 00:13:50.155 "write_zeroes": true, 00:13:50.155 "zcopy": true, 00:13:50.155 "get_zone_info": false, 00:13:50.155 "zone_management": false, 00:13:50.155 "zone_append": false, 00:13:50.155 "compare": false, 00:13:50.155 "compare_and_write": false, 00:13:50.155 "abort": true, 00:13:50.155 "seek_hole": false, 00:13:50.155 "seek_data": false, 00:13:50.155 "copy": true, 00:13:50.155 "nvme_iov_md": false 00:13:50.155 }, 00:13:50.155 "memory_domains": [ 00:13:50.155 { 00:13:50.155 "dma_device_id": "system", 00:13:50.155 "dma_device_type": 1 00:13:50.155 }, 00:13:50.155 { 00:13:50.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.155 "dma_device_type": 2 00:13:50.155 } 00:13:50.155 ], 00:13:50.155 "driver_specific": {} 00:13:50.155 } 00:13:50.155 ] 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:50.155 01:33:48 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.155 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.155 "name": "Existed_Raid", 00:13:50.155 "uuid": "30df5246-9ee9-42ac-a0ea-cab58ab3f3dc", 00:13:50.155 "strip_size_kb": 64, 00:13:50.155 "state": "online", 00:13:50.155 "raid_level": "raid5f", 00:13:50.155 "superblock": true, 00:13:50.155 "num_base_bdevs": 3, 00:13:50.155 "num_base_bdevs_discovered": 3, 00:13:50.156 "num_base_bdevs_operational": 3, 00:13:50.156 "base_bdevs_list": [ 00:13:50.156 { 00:13:50.156 "name": "NewBaseBdev", 00:13:50.156 "uuid": "84b207ed-9b16-4b9d-be46-3852f604bad3", 00:13:50.156 "is_configured": true, 00:13:50.156 "data_offset": 2048, 00:13:50.156 "data_size": 63488 00:13:50.156 }, 
00:13:50.156 { 00:13:50.156 "name": "BaseBdev2", 00:13:50.156 "uuid": "5f977dee-4552-45e6-b281-e56e7db32b7a", 00:13:50.156 "is_configured": true, 00:13:50.156 "data_offset": 2048, 00:13:50.156 "data_size": 63488 00:13:50.156 }, 00:13:50.156 { 00:13:50.156 "name": "BaseBdev3", 00:13:50.156 "uuid": "914cbf39-b437-47c3-8142-6e9c8cd30791", 00:13:50.156 "is_configured": true, 00:13:50.156 "data_offset": 2048, 00:13:50.156 "data_size": 63488 00:13:50.156 } 00:13:50.156 ] 00:13:50.156 }' 00:13:50.156 01:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.156 01:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.415 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:50.415 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:50.415 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:50.415 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:50.415 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:50.415 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:50.415 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:50.415 01:33:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.415 01:33:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.415 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:50.415 [2024-10-09 01:33:49.254927] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:13:50.415 01:33:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.415 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:50.415 "name": "Existed_Raid", 00:13:50.415 "aliases": [ 00:13:50.415 "30df5246-9ee9-42ac-a0ea-cab58ab3f3dc" 00:13:50.415 ], 00:13:50.415 "product_name": "Raid Volume", 00:13:50.415 "block_size": 512, 00:13:50.415 "num_blocks": 126976, 00:13:50.415 "uuid": "30df5246-9ee9-42ac-a0ea-cab58ab3f3dc", 00:13:50.415 "assigned_rate_limits": { 00:13:50.415 "rw_ios_per_sec": 0, 00:13:50.415 "rw_mbytes_per_sec": 0, 00:13:50.415 "r_mbytes_per_sec": 0, 00:13:50.415 "w_mbytes_per_sec": 0 00:13:50.415 }, 00:13:50.415 "claimed": false, 00:13:50.415 "zoned": false, 00:13:50.415 "supported_io_types": { 00:13:50.415 "read": true, 00:13:50.415 "write": true, 00:13:50.415 "unmap": false, 00:13:50.415 "flush": false, 00:13:50.415 "reset": true, 00:13:50.415 "nvme_admin": false, 00:13:50.415 "nvme_io": false, 00:13:50.415 "nvme_io_md": false, 00:13:50.415 "write_zeroes": true, 00:13:50.415 "zcopy": false, 00:13:50.415 "get_zone_info": false, 00:13:50.415 "zone_management": false, 00:13:50.415 "zone_append": false, 00:13:50.415 "compare": false, 00:13:50.415 "compare_and_write": false, 00:13:50.415 "abort": false, 00:13:50.415 "seek_hole": false, 00:13:50.415 "seek_data": false, 00:13:50.415 "copy": false, 00:13:50.415 "nvme_iov_md": false 00:13:50.415 }, 00:13:50.415 "driver_specific": { 00:13:50.415 "raid": { 00:13:50.415 "uuid": "30df5246-9ee9-42ac-a0ea-cab58ab3f3dc", 00:13:50.415 "strip_size_kb": 64, 00:13:50.415 "state": "online", 00:13:50.415 "raid_level": "raid5f", 00:13:50.415 "superblock": true, 00:13:50.415 "num_base_bdevs": 3, 00:13:50.415 "num_base_bdevs_discovered": 3, 00:13:50.415 "num_base_bdevs_operational": 3, 00:13:50.415 "base_bdevs_list": [ 00:13:50.415 { 00:13:50.415 "name": "NewBaseBdev", 00:13:50.415 "uuid": 
"84b207ed-9b16-4b9d-be46-3852f604bad3", 00:13:50.415 "is_configured": true, 00:13:50.415 "data_offset": 2048, 00:13:50.415 "data_size": 63488 00:13:50.415 }, 00:13:50.415 { 00:13:50.415 "name": "BaseBdev2", 00:13:50.415 "uuid": "5f977dee-4552-45e6-b281-e56e7db32b7a", 00:13:50.415 "is_configured": true, 00:13:50.415 "data_offset": 2048, 00:13:50.415 "data_size": 63488 00:13:50.415 }, 00:13:50.415 { 00:13:50.415 "name": "BaseBdev3", 00:13:50.415 "uuid": "914cbf39-b437-47c3-8142-6e9c8cd30791", 00:13:50.415 "is_configured": true, 00:13:50.415 "data_offset": 2048, 00:13:50.415 "data_size": 63488 00:13:50.415 } 00:13:50.415 ] 00:13:50.415 } 00:13:50.415 } 00:13:50.415 }' 00:13:50.415 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:50.675 BaseBdev2 00:13:50.675 BaseBdev3' 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.675 01:33:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.675 [2024-10-09 01:33:49.538793] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:50.675 [2024-10-09 01:33:49.538821] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:50.675 [2024-10-09 01:33:49.538887] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:50.675 [2024-10-09 01:33:49.539158] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:50.676 [2024-10-09 01:33:49.539168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:50.676 01:33:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.676 01:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 92201 00:13:50.676 01:33:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 92201 ']' 00:13:50.676 01:33:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 92201 00:13:50.676 01:33:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:50.676 01:33:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:50.676 01:33:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92201 00:13:50.967 killing process with pid 92201 00:13:50.967 01:33:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:50.967 01:33:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:50.967 01:33:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92201' 00:13:50.967 01:33:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 92201 00:13:50.967 [2024-10-09 01:33:49.581000] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:50.967 01:33:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 92201 00:13:50.967 [2024-10-09 01:33:49.638816] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:51.237 01:33:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:51.237 00:13:51.237 real 0m9.016s 00:13:51.237 user 0m15.005s 00:13:51.237 sys 0m2.015s 00:13:51.237 01:33:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:51.237 ************************************ 00:13:51.237 END TEST raid5f_state_function_test_sb 00:13:51.237 ************************************ 00:13:51.237 01:33:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.237 01:33:50 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:13:51.237 01:33:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:51.237 01:33:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:51.237 01:33:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:51.237 ************************************ 00:13:51.237 START TEST 
raid5f_superblock_test 00:13:51.237 ************************************ 00:13:51.237 01:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:13:51.237 01:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:13:51.237 01:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:51.237 01:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:51.237 01:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:51.237 01:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:51.237 01:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:51.237 01:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:51.237 01:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:51.237 01:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:51.237 01:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:51.237 01:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:51.237 01:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:51.237 01:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:51.237 01:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:13:51.237 01:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:51.237 01:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:51.237 01:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- 
# raid_pid=92810 00:13:51.237 01:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:51.237 01:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 92810 00:13:51.237 01:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 92810 ']' 00:13:51.237 01:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.237 01:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:51.237 01:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.237 01:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:51.237 01:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.497 [2024-10-09 01:33:50.184779] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:13:51.497 [2024-10-09 01:33:50.184991] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92810 ] 00:13:51.497 [2024-10-09 01:33:50.316273] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:51.497 [2024-10-09 01:33:50.342895] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.757 [2024-10-09 01:33:50.413769] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.757 [2024-10-09 01:33:50.489551] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.757 [2024-10-09 01:33:50.489689] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.326 malloc1 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.326 [2024-10-09 01:33:51.040253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:52.326 [2024-10-09 01:33:51.040404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.326 [2024-10-09 01:33:51.040449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:52.326 [2024-10-09 01:33:51.040481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.326 [2024-10-09 01:33:51.042933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.326 [2024-10-09 01:33:51.043006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:52.326 pt1 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:52.326 01:33:51 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.326 malloc2 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.326 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.326 [2024-10-09 01:33:51.089991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:52.326 [2024-10-09 01:33:51.090059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.326 [2024-10-09 01:33:51.090084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:52.326 [2024-10-09 01:33:51.090097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.327 [2024-10-09 01:33:51.093270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.327 [2024-10-09 01:33:51.093374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:52.327 pt2 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 
00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.327 malloc3 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.327 [2024-10-09 01:33:51.124738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:52.327 [2024-10-09 01:33:51.124839] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.327 [2024-10-09 01:33:51.124877] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000008a80 00:13:52.327 [2024-10-09 01:33:51.124906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.327 [2024-10-09 01:33:51.127246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.327 [2024-10-09 01:33:51.127315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:52.327 pt3 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.327 [2024-10-09 01:33:51.136810] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:52.327 [2024-10-09 01:33:51.138953] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:52.327 [2024-10-09 01:33:51.139057] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:52.327 [2024-10-09 01:33:51.139251] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:52.327 [2024-10-09 01:33:51.139296] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:52.327 [2024-10-09 01:33:51.139592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:52.327 [2024-10-09 01:33:51.140069] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:52.327 [2024-10-09 01:33:51.140115] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:52.327 [2024-10-09 01:33:51.140288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.327 "name": "raid_bdev1", 00:13:52.327 "uuid": "b6fc5cce-fb45-48a7-80cb-6a294fd20ec0", 00:13:52.327 "strip_size_kb": 64, 00:13:52.327 "state": "online", 00:13:52.327 "raid_level": "raid5f", 00:13:52.327 "superblock": true, 00:13:52.327 "num_base_bdevs": 3, 00:13:52.327 "num_base_bdevs_discovered": 3, 00:13:52.327 "num_base_bdevs_operational": 3, 00:13:52.327 "base_bdevs_list": [ 00:13:52.327 { 00:13:52.327 "name": "pt1", 00:13:52.327 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:52.327 "is_configured": true, 00:13:52.327 "data_offset": 2048, 00:13:52.327 "data_size": 63488 00:13:52.327 }, 00:13:52.327 { 00:13:52.327 "name": "pt2", 00:13:52.327 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:52.327 "is_configured": true, 00:13:52.327 "data_offset": 2048, 00:13:52.327 "data_size": 63488 00:13:52.327 }, 00:13:52.327 { 00:13:52.327 "name": "pt3", 00:13:52.327 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:52.327 "is_configured": true, 00:13:52.327 "data_offset": 2048, 00:13:52.327 "data_size": 63488 00:13:52.327 } 00:13:52.327 ] 00:13:52.327 }' 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.327 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:52.896 
01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:52.896 [2024-10-09 01:33:51.582817] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:52.896 "name": "raid_bdev1", 00:13:52.896 "aliases": [ 00:13:52.896 "b6fc5cce-fb45-48a7-80cb-6a294fd20ec0" 00:13:52.896 ], 00:13:52.896 "product_name": "Raid Volume", 00:13:52.896 "block_size": 512, 00:13:52.896 "num_blocks": 126976, 00:13:52.896 "uuid": "b6fc5cce-fb45-48a7-80cb-6a294fd20ec0", 00:13:52.896 "assigned_rate_limits": { 00:13:52.896 "rw_ios_per_sec": 0, 00:13:52.896 "rw_mbytes_per_sec": 0, 00:13:52.896 "r_mbytes_per_sec": 0, 00:13:52.896 "w_mbytes_per_sec": 0 00:13:52.896 }, 00:13:52.896 "claimed": false, 00:13:52.896 "zoned": false, 00:13:52.896 "supported_io_types": { 00:13:52.896 "read": true, 00:13:52.896 "write": true, 00:13:52.896 "unmap": false, 00:13:52.896 "flush": false, 00:13:52.896 "reset": true, 00:13:52.896 "nvme_admin": false, 00:13:52.896 "nvme_io": false, 00:13:52.896 "nvme_io_md": false, 00:13:52.896 "write_zeroes": true, 00:13:52.896 "zcopy": false, 00:13:52.896 "get_zone_info": false, 00:13:52.896 "zone_management": false, 00:13:52.896 "zone_append": false, 00:13:52.896 "compare": false, 00:13:52.896 "compare_and_write": false, 00:13:52.896 "abort": false, 00:13:52.896 "seek_hole": 
false,
00:13:52.896 "seek_data": false,
00:13:52.896 "copy": false,
00:13:52.896 "nvme_iov_md": false
00:13:52.896 },
00:13:52.896 "driver_specific": {
00:13:52.896 "raid": {
00:13:52.896 "uuid": "b6fc5cce-fb45-48a7-80cb-6a294fd20ec0",
00:13:52.896 "strip_size_kb": 64,
00:13:52.896 "state": "online",
00:13:52.896 "raid_level": "raid5f",
00:13:52.896 "superblock": true,
00:13:52.896 "num_base_bdevs": 3,
00:13:52.896 "num_base_bdevs_discovered": 3,
00:13:52.896 "num_base_bdevs_operational": 3,
00:13:52.896 "base_bdevs_list": [
00:13:52.896 {
00:13:52.896 "name": "pt1",
00:13:52.896 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:52.896 "is_configured": true,
00:13:52.896 "data_offset": 2048,
00:13:52.896 "data_size": 63488
00:13:52.896 },
00:13:52.896 {
00:13:52.896 "name": "pt2",
00:13:52.896 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:52.896 "is_configured": true,
00:13:52.896 "data_offset": 2048,
00:13:52.896 "data_size": 63488
00:13:52.896 },
00:13:52.896 {
00:13:52.896 "name": "pt3",
00:13:52.896 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:52.896 "is_configured": true,
00:13:52.896 "data_offset": 2048,
00:13:52.896 "data_size": 63488
00:13:52.896 }
00:13:52.896 ]
00:13:52.896 }
00:13:52.896 }
00:13:52.896 }'
00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:13:52.896 pt2
00:13:52.896 pt3'
00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:52.896 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.156 [2024-10-09 01:33:51.878857] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b6fc5cce-fb45-48a7-80cb-6a294fd20ec0
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b6fc5cce-fb45-48a7-80cb-6a294fd20ec0 ']'
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.156 [2024-10-09 01:33:51.922711] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:53.156 [2024-10-09 01:33:51.922742] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:13:53.156 [2024-10-09 01:33:51.922814] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:53.156 [2024-10-09 01:33:51.922899] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:53.156 [2024-10-09 01:33:51.922909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.156 01:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.156 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.156 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:13:53.156 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.156 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.156 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.416 [2024-10-09 01:33:52.078795] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:13:53.416 [2024-10-09 01:33:52.080954] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:13:53.416 [2024-10-09 01:33:52.081045] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:13:53.416 [2024-10-09 01:33:52.081093] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:13:53.416 [2024-10-09 01:33:52.081134] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:13:53.416 [2024-10-09 01:33:52.081152] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:13:53.416 [2024-10-09 01:33:52.081165] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:53.416 [2024-10-09 01:33:52.081175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:13:53.416 request:
00:13:53.416 {
00:13:53.416 "name": "raid_bdev1",
00:13:53.416 "raid_level": "raid5f",
00:13:53.416 "base_bdevs": [
00:13:53.416 "malloc1",
00:13:53.416 "malloc2",
00:13:53.416 "malloc3"
00:13:53.416 ],
00:13:53.416 "strip_size_kb": 64,
00:13:53.416 "superblock": false,
00:13:53.416 "method": "bdev_raid_create",
00:13:53.416 "req_id": 1
00:13:53.416 }
00:13:53.416 Got JSON-RPC error response
00:13:53.416 response:
00:13:53.416 {
00:13:53.416 "code": -17,
00:13:53.416 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:13:53.416 }
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.416 [2024-10-09 01:33:52.146786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:13:53.416 [2024-10-09 01:33:52.146869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:53.416 [2024-10-09 01:33:52.146901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:13:53.416 [2024-10-09 01:33:52.146930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:53.416 [2024-10-09 01:33:52.149319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:53.416 [2024-10-09 01:33:52.149388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:13:53.416 [2024-10-09 01:33:52.149472] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:13:53.416 [2024-10-09 01:33:52.149548] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:13:53.416 pt1
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.416 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:53.416 "name": "raid_bdev1",
00:13:53.416 "uuid": "b6fc5cce-fb45-48a7-80cb-6a294fd20ec0",
00:13:53.416 "strip_size_kb": 64,
00:13:53.416 "state": "configuring",
00:13:53.416 "raid_level": "raid5f",
00:13:53.416 "superblock": true,
00:13:53.416 "num_base_bdevs": 3,
00:13:53.416 "num_base_bdevs_discovered": 1,
00:13:53.416 "num_base_bdevs_operational": 3,
00:13:53.416 "base_bdevs_list": [
00:13:53.416 {
00:13:53.416 "name": "pt1",
00:13:53.416 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:53.416 "is_configured": true,
00:13:53.416 "data_offset": 2048,
00:13:53.416 "data_size": 63488
00:13:53.416 },
00:13:53.416 {
00:13:53.416 "name": null,
00:13:53.416 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:53.416 "is_configured": false,
00:13:53.416 "data_offset": 2048,
00:13:53.416 "data_size": 63488
00:13:53.416 },
00:13:53.416 {
00:13:53.416 "name": null,
00:13:53.416 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:53.416 "is_configured": false,
00:13:53.416 "data_offset": 2048,
00:13:53.417 "data_size": 63488
00:13:53.417 }
00:13:53.417 ]
00:13:53.417 }'
00:13:53.417 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:53.417 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.676 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:13:53.676 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:13:53.676 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.676 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.676 [2024-10-09 01:33:52.558887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:13:53.676 [2024-10-09 01:33:52.558948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:53.676 [2024-10-09 01:33:52.558970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:13:53.676 [2024-10-09 01:33:52.558979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:53.676 [2024-10-09 01:33:52.559340] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:53.676 [2024-10-09 01:33:52.559355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:13:53.676 [2024-10-09 01:33:52.559416] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:13:53.676 [2024-10-09 01:33:52.559434] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:13:53.676 pt2
00:13:53.676 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.676 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:13:53.676 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.676 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.935 [2024-10-09 01:33:52.570931] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:13:53.935 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.935 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3
00:13:53.935 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:53.935 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:53.935 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:53.935 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:53.935 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:53.935 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:53.935 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:53.935 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:53.935 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:53.935 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:53.935 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:53.935 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.935 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.935 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.935 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:53.935 "name": "raid_bdev1",
00:13:53.935 "uuid": "b6fc5cce-fb45-48a7-80cb-6a294fd20ec0",
00:13:53.935 "strip_size_kb": 64,
00:13:53.935 "state": "configuring",
00:13:53.935 "raid_level": "raid5f",
00:13:53.935 "superblock": true,
00:13:53.935 "num_base_bdevs": 3,
00:13:53.935 "num_base_bdevs_discovered": 1,
00:13:53.935 "num_base_bdevs_operational": 3,
00:13:53.935 "base_bdevs_list": [
00:13:53.935 {
00:13:53.935 "name": "pt1",
00:13:53.935 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:53.935 "is_configured": true,
00:13:53.935 "data_offset": 2048,
00:13:53.935 "data_size": 63488
00:13:53.935 },
00:13:53.935 {
00:13:53.935 "name": null,
00:13:53.935 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:53.935 "is_configured": false,
00:13:53.935 "data_offset": 0,
00:13:53.935 "data_size": 63488
00:13:53.935 },
00:13:53.935 {
00:13:53.935 "name": null,
00:13:53.935 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:53.935 "is_configured": false,
00:13:53.935 "data_offset": 2048,
00:13:53.935 "data_size": 63488
00:13:53.935 }
00:13:53.935 ]
00:13:53.935 }'
00:13:53.935 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:53.935 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.195 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:13:54.195 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:13:54.195 01:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:13:54.195 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:54.195 01:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.195 [2024-10-09 01:33:53.002992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:13:54.195 [2024-10-09 01:33:53.003093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:54.195 [2024-10-09 01:33:53.003122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:13:54.195 [2024-10-09 01:33:53.003152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:54.195 [2024-10-09 01:33:53.003511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:54.195 [2024-10-09 01:33:53.003586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:13:54.195 [2024-10-09 01:33:53.003666] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:13:54.195 [2024-10-09 01:33:53.003716] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:13:54.195 pt2
00:13:54.195 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:54.195 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:13:54.195 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:13:54.195 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:13:54.195 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:54.195 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.195 [2024-10-09 01:33:53.015006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:13:54.195 [2024-10-09 01:33:53.015090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:54.195 [2024-10-09 01:33:53.015116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:13:54.195 [2024-10-09 01:33:53.015145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:54.195 [2024-10-09 01:33:53.015504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:54.195 [2024-10-09 01:33:53.015572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:13:54.195 [2024-10-09 01:33:53.015648] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:13:54.195 [2024-10-09 01:33:53.015701] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:13:54.195 [2024-10-09 01:33:53.015808] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:13:54.195 [2024-10-09 01:33:53.015821] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:13:54.195 [2024-10-09 01:33:53.016056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:13:54.195 [2024-10-09 01:33:53.016475] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:13:54.195 [2024-10-09 01:33:53.016486] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:13:54.195 [2024-10-09 01:33:53.016600] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:54.195 pt3
00:13:54.195 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:54.195 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:13:54.195 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:13:54.195 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:13:54.195 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:54.195 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:54.195 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:54.195 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:54.195 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:54.195 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:54.195 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:54.195 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:54.195 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:54.195 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:54.195 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:54.195 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:54.195 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.195 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:54.195 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:54.195 "name": "raid_bdev1",
00:13:54.195 "uuid": "b6fc5cce-fb45-48a7-80cb-6a294fd20ec0",
00:13:54.195 "strip_size_kb": 64,
00:13:54.195 "state": "online",
00:13:54.195 "raid_level": "raid5f",
00:13:54.195 "superblock": true,
00:13:54.195 "num_base_bdevs": 3,
00:13:54.195 "num_base_bdevs_discovered": 3,
00:13:54.195 "num_base_bdevs_operational": 3,
00:13:54.195 "base_bdevs_list": [
00:13:54.195 {
00:13:54.195 "name": "pt1",
00:13:54.195 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:54.195 "is_configured": true,
00:13:54.195 "data_offset": 2048,
00:13:54.195 "data_size": 63488
00:13:54.195 },
00:13:54.195 {
00:13:54.195 "name": "pt2",
00:13:54.195 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:54.195 "is_configured": true,
00:13:54.195 "data_offset": 2048,
00:13:54.195 "data_size": 63488
00:13:54.195 },
00:13:54.195 {
00:13:54.195 "name": "pt3",
00:13:54.195 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:54.195 "is_configured": true,
00:13:54.195 "data_offset": 2048,
00:13:54.195 "data_size": 63488
00:13:54.195 }
00:13:54.195 ]
00:13:54.195 }'
00:13:54.195 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:54.195 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.764 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:13:54.764 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:13:54.764 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:13:54.764 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:13:54.764 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:13:54.764 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:13:54.764 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:54.764 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:13:54.764 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:54.764 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.764 [2024-10-09 01:33:53.535358] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:54.764 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:54.764 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:13:54.764 "name": "raid_bdev1",
00:13:54.764 "aliases": [
00:13:54.764 "b6fc5cce-fb45-48a7-80cb-6a294fd20ec0"
00:13:54.764 ],
00:13:54.764 "product_name": "Raid Volume",
00:13:54.764 "block_size": 512,
00:13:54.764 "num_blocks": 126976,
00:13:54.764 "uuid": "b6fc5cce-fb45-48a7-80cb-6a294fd20ec0",
00:13:54.764 "assigned_rate_limits": {
00:13:54.764 "rw_ios_per_sec": 0,
00:13:54.764 "rw_mbytes_per_sec": 0,
00:13:54.764 "r_mbytes_per_sec": 0,
00:13:54.764 "w_mbytes_per_sec": 0
00:13:54.764 },
00:13:54.764 "claimed": false,
00:13:54.764 "zoned": false,
00:13:54.764 "supported_io_types": {
00:13:54.764 "read": true,
00:13:54.764 "write": true,
00:13:54.764 "unmap": false,
00:13:54.764 "flush": false,
00:13:54.764 "reset": true,
00:13:54.764 "nvme_admin": false,
00:13:54.764 "nvme_io": false,
00:13:54.764 "nvme_io_md": false,
00:13:54.764 "write_zeroes": true,
00:13:54.764 "zcopy": false,
00:13:54.764 "get_zone_info": false,
00:13:54.764 "zone_management": false,
00:13:54.764 "zone_append": false,
00:13:54.764 "compare": false,
00:13:54.764 "compare_and_write": false,
00:13:54.764 "abort": false,
00:13:54.764 "seek_hole": false,
00:13:54.764 "seek_data": false,
00:13:54.764 "copy": false,
00:13:54.764 "nvme_iov_md": false
00:13:54.764 },
00:13:54.764 "driver_specific": {
00:13:54.764 "raid": {
00:13:54.764 "uuid": "b6fc5cce-fb45-48a7-80cb-6a294fd20ec0",
00:13:54.764 "strip_size_kb": 64,
00:13:54.764 "state": "online",
00:13:54.764 "raid_level": "raid5f",
00:13:54.764 "superblock": true,
00:13:54.764 "num_base_bdevs": 3,
00:13:54.764 "num_base_bdevs_discovered": 3,
00:13:54.764 "num_base_bdevs_operational": 3,
00:13:54.764 "base_bdevs_list": [
00:13:54.764 {
00:13:54.764 "name": "pt1",
00:13:54.764 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:54.764 "is_configured": true,
00:13:54.764 "data_offset": 2048,
00:13:54.764 "data_size": 63488
00:13:54.764 },
00:13:54.764 {
00:13:54.764 "name": "pt2",
00:13:54.764 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:54.764 "is_configured": true,
00:13:54.764 "data_offset": 2048,
00:13:54.764 "data_size": 63488
00:13:54.764 },
00:13:54.764 {
00:13:54.764 "name": "pt3",
00:13:54.764 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:54.764 "is_configured": true,
00:13:54.764 "data_offset": 2048,
00:13:54.764 "data_size": 63488
00:13:54.764 }
00:13:54.764 ]
00:13:54.764 }
00:13:54.764 }
00:13:54.764 }'
00:13:54.764 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:13:54.764 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:13:54.764 pt2
00:13:54.764 pt3'
00:13:54.764 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:13:55.024 [2024-10-09 01:33:53.835368] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b6fc5cce-fb45-48a7-80cb-6a294fd20ec0 '!=' b6fc5cce-fb45-48a7-80cb-6a294fd20ec0 ']'
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.024 [2024-10-09 01:33:53.883273]
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.024 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.284 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.284 "name": "raid_bdev1", 
00:13:55.284 "uuid": "b6fc5cce-fb45-48a7-80cb-6a294fd20ec0", 00:13:55.284 "strip_size_kb": 64, 00:13:55.284 "state": "online", 00:13:55.284 "raid_level": "raid5f", 00:13:55.284 "superblock": true, 00:13:55.284 "num_base_bdevs": 3, 00:13:55.284 "num_base_bdevs_discovered": 2, 00:13:55.284 "num_base_bdevs_operational": 2, 00:13:55.284 "base_bdevs_list": [ 00:13:55.284 { 00:13:55.284 "name": null, 00:13:55.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.284 "is_configured": false, 00:13:55.284 "data_offset": 0, 00:13:55.284 "data_size": 63488 00:13:55.284 }, 00:13:55.284 { 00:13:55.284 "name": "pt2", 00:13:55.284 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:55.284 "is_configured": true, 00:13:55.284 "data_offset": 2048, 00:13:55.284 "data_size": 63488 00:13:55.284 }, 00:13:55.284 { 00:13:55.284 "name": "pt3", 00:13:55.284 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:55.284 "is_configured": true, 00:13:55.284 "data_offset": 2048, 00:13:55.284 "data_size": 63488 00:13:55.284 } 00:13:55.284 ] 00:13:55.284 }' 00:13:55.284 01:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.284 01:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.543 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:55.543 01:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.543 01:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.543 [2024-10-09 01:33:54.367360] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:55.543 [2024-10-09 01:33:54.367384] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:55.543 [2024-10-09 01:33:54.367445] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:55.543 [2024-10-09 01:33:54.367496] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:55.543 [2024-10-09 01:33:54.367507] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:55.543 01:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.543 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.543 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:55.543 01:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.543 01:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.543 01:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.543 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:55.543 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:55.543 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:55.543 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:55.543 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:55.543 01:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.543 01:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.803 [2024-10-09 01:33:54.455366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:55.803 [2024-10-09 01:33:54.455419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.803 [2024-10-09 01:33:54.455434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:55.803 [2024-10-09 01:33:54.455445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.803 [2024-10-09 01:33:54.457773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.803 [2024-10-09 01:33:54.457810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:55.803 [2024-10-09 01:33:54.457886] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt2 00:13:55.803 [2024-10-09 01:33:54.457924] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:55.803 pt2 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.803 01:33:54 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.803 "name": "raid_bdev1", 00:13:55.803 "uuid": "b6fc5cce-fb45-48a7-80cb-6a294fd20ec0", 00:13:55.803 "strip_size_kb": 64, 00:13:55.803 "state": "configuring", 00:13:55.803 "raid_level": "raid5f", 00:13:55.803 "superblock": true, 00:13:55.803 "num_base_bdevs": 3, 00:13:55.803 "num_base_bdevs_discovered": 1, 00:13:55.803 "num_base_bdevs_operational": 2, 00:13:55.803 "base_bdevs_list": [ 00:13:55.803 { 00:13:55.803 "name": null, 00:13:55.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.803 "is_configured": false, 00:13:55.803 "data_offset": 2048, 00:13:55.803 "data_size": 63488 00:13:55.803 }, 00:13:55.803 { 00:13:55.803 "name": "pt2", 00:13:55.803 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:55.803 "is_configured": true, 00:13:55.803 "data_offset": 2048, 00:13:55.803 "data_size": 63488 00:13:55.803 }, 00:13:55.803 { 00:13:55.803 "name": null, 00:13:55.803 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:55.803 "is_configured": false, 00:13:55.803 "data_offset": 2048, 00:13:55.803 "data_size": 63488 00:13:55.803 } 00:13:55.803 ] 00:13:55.803 }' 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.803 01:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.063 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:56.063 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:56.063 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:13:56.063 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:56.063 01:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.063 01:33:54 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.063 [2024-10-09 01:33:54.927487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:56.063 [2024-10-09 01:33:54.927540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.063 [2024-10-09 01:33:54.927556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:56.063 [2024-10-09 01:33:54.927568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.063 [2024-10-09 01:33:54.927915] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.063 [2024-10-09 01:33:54.927933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:56.063 [2024-10-09 01:33:54.927990] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:56.063 [2024-10-09 01:33:54.928020] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:56.063 [2024-10-09 01:33:54.928109] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:56.063 [2024-10-09 01:33:54.928120] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:56.063 [2024-10-09 01:33:54.928354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:56.063 [2024-10-09 01:33:54.928844] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:56.063 [2024-10-09 01:33:54.928864] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:56.063 [2024-10-09 01:33:54.929134] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.063 pt3 00:13:56.063 01:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.063 01:33:54 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:56.063 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.063 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.063 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.063 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.063 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:56.063 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.063 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.063 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.063 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.063 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.063 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.063 01:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.063 01:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.323 01:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.323 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.323 "name": "raid_bdev1", 00:13:56.323 "uuid": "b6fc5cce-fb45-48a7-80cb-6a294fd20ec0", 00:13:56.323 "strip_size_kb": 64, 00:13:56.323 "state": "online", 00:13:56.323 "raid_level": "raid5f", 00:13:56.323 "superblock": true, 
00:13:56.323 "num_base_bdevs": 3, 00:13:56.323 "num_base_bdevs_discovered": 2, 00:13:56.323 "num_base_bdevs_operational": 2, 00:13:56.323 "base_bdevs_list": [ 00:13:56.323 { 00:13:56.323 "name": null, 00:13:56.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.323 "is_configured": false, 00:13:56.323 "data_offset": 2048, 00:13:56.323 "data_size": 63488 00:13:56.323 }, 00:13:56.323 { 00:13:56.323 "name": "pt2", 00:13:56.323 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:56.323 "is_configured": true, 00:13:56.323 "data_offset": 2048, 00:13:56.323 "data_size": 63488 00:13:56.323 }, 00:13:56.323 { 00:13:56.323 "name": "pt3", 00:13:56.323 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:56.323 "is_configured": true, 00:13:56.323 "data_offset": 2048, 00:13:56.323 "data_size": 63488 00:13:56.323 } 00:13:56.323 ] 00:13:56.323 }' 00:13:56.323 01:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.323 01:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.582 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:56.582 01:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.582 01:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.582 [2024-10-09 01:33:55.395605] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:56.582 [2024-10-09 01:33:55.395631] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:56.582 [2024-10-09 01:33:55.395687] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:56.582 [2024-10-09 01:33:55.395739] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:56.582 [2024-10-09 01:33:55.395749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:56.582 01:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.582 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.582 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:56.582 01:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.582 01:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.582 01:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.583 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:56.583 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:56.583 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:13:56.583 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:13:56.583 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:13:56.583 01:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.583 01:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.583 01:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.583 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:56.583 01:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.583 01:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.583 [2024-10-09 01:33:55.467652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc1 00:13:56.583 [2024-10-09 01:33:55.468129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.583 [2024-10-09 01:33:55.468215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:56.583 [2024-10-09 01:33:55.468262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.583 [2024-10-09 01:33:55.470794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.583 [2024-10-09 01:33:55.470828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:56.583 [2024-10-09 01:33:55.470890] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:56.583 [2024-10-09 01:33:55.470919] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:56.583 [2024-10-09 01:33:55.471018] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:56.583 [2024-10-09 01:33:55.471033] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:56.583 [2024-10-09 01:33:55.471051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:13:56.583 [2024-10-09 01:33:55.471094] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:56.842 pt1 00:13:56.842 01:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.842 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:13:56.842 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:56.842 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.842 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 
-- # local expected_state=configuring 00:13:56.842 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.842 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.842 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:56.842 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.842 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.842 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.842 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.842 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.842 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.842 01:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.842 01:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.842 01:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.842 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.842 "name": "raid_bdev1", 00:13:56.842 "uuid": "b6fc5cce-fb45-48a7-80cb-6a294fd20ec0", 00:13:56.842 "strip_size_kb": 64, 00:13:56.842 "state": "configuring", 00:13:56.842 "raid_level": "raid5f", 00:13:56.842 "superblock": true, 00:13:56.842 "num_base_bdevs": 3, 00:13:56.842 "num_base_bdevs_discovered": 1, 00:13:56.842 "num_base_bdevs_operational": 2, 00:13:56.842 "base_bdevs_list": [ 00:13:56.842 { 00:13:56.842 "name": null, 00:13:56.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.842 "is_configured": false, 
00:13:56.842 "data_offset": 2048, 00:13:56.842 "data_size": 63488 00:13:56.842 }, 00:13:56.842 { 00:13:56.842 "name": "pt2", 00:13:56.842 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:56.842 "is_configured": true, 00:13:56.842 "data_offset": 2048, 00:13:56.842 "data_size": 63488 00:13:56.842 }, 00:13:56.842 { 00:13:56.842 "name": null, 00:13:56.842 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:56.842 "is_configured": false, 00:13:56.842 "data_offset": 2048, 00:13:56.842 "data_size": 63488 00:13:56.842 } 00:13:56.842 ] 00:13:56.842 }' 00:13:56.842 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.842 01:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.102 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:57.102 01:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.102 01:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.102 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:57.102 01:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.102 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:57.102 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:57.102 01:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.102 01:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.102 [2024-10-09 01:33:55.955798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:57.102 [2024-10-09 01:33:55.955849] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.102 [2024-10-09 01:33:55.955867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:57.102 [2024-10-09 01:33:55.955877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.102 [2024-10-09 01:33:55.956267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.102 [2024-10-09 01:33:55.956283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:57.102 [2024-10-09 01:33:55.956344] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:57.102 [2024-10-09 01:33:55.956362] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:57.102 [2024-10-09 01:33:55.956449] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:57.102 [2024-10-09 01:33:55.956457] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:57.102 [2024-10-09 01:33:55.956731] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:57.102 [2024-10-09 01:33:55.957206] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:57.102 [2024-10-09 01:33:55.957229] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:57.102 [2024-10-09 01:33:55.957387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.102 pt3 00:13:57.102 01:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.102 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:57.102 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.102 01:33:55 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.102 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.102 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.102 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:57.102 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.102 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.102 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.102 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.102 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.102 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.102 01:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.102 01:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.102 01:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.361 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.361 "name": "raid_bdev1", 00:13:57.361 "uuid": "b6fc5cce-fb45-48a7-80cb-6a294fd20ec0", 00:13:57.361 "strip_size_kb": 64, 00:13:57.361 "state": "online", 00:13:57.361 "raid_level": "raid5f", 00:13:57.361 "superblock": true, 00:13:57.361 "num_base_bdevs": 3, 00:13:57.361 "num_base_bdevs_discovered": 2, 00:13:57.361 "num_base_bdevs_operational": 2, 00:13:57.361 "base_bdevs_list": [ 00:13:57.361 { 00:13:57.361 "name": null, 00:13:57.361 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:57.361 "is_configured": false, 00:13:57.361 "data_offset": 2048, 00:13:57.361 "data_size": 63488 00:13:57.361 }, 00:13:57.361 { 00:13:57.361 "name": "pt2", 00:13:57.361 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:57.361 "is_configured": true, 00:13:57.361 "data_offset": 2048, 00:13:57.361 "data_size": 63488 00:13:57.361 }, 00:13:57.361 { 00:13:57.361 "name": "pt3", 00:13:57.361 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:57.361 "is_configured": true, 00:13:57.361 "data_offset": 2048, 00:13:57.361 "data_size": 63488 00:13:57.361 } 00:13:57.361 ] 00:13:57.361 }' 00:13:57.361 01:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.361 01:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.621 01:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:57.621 01:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.621 01:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:57.621 01:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.621 01:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.621 01:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:57.621 01:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:57.621 01:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.621 01:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.621 01:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:57.621 [2024-10-09 01:33:56.444119] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:57.621 01:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.621 01:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' b6fc5cce-fb45-48a7-80cb-6a294fd20ec0 '!=' b6fc5cce-fb45-48a7-80cb-6a294fd20ec0 ']' 00:13:57.621 01:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 92810 00:13:57.621 01:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 92810 ']' 00:13:57.621 01:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 92810 00:13:57.621 01:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:57.621 01:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:57.621 01:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92810 00:13:57.880 killing process with pid 92810 00:13:57.880 01:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:57.880 01:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:57.880 01:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92810' 00:13:57.880 01:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 92810 00:13:57.880 [2024-10-09 01:33:56.518896] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:57.880 [2024-10-09 01:33:56.518987] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:57.880 [2024-10-09 01:33:56.519042] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:57.880 [2024-10-09 01:33:56.519054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:57.880 01:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 92810 00:13:57.880 [2024-10-09 01:33:56.579496] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:58.141 01:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:58.141 00:13:58.141 real 0m6.860s 00:13:58.141 user 0m11.261s 00:13:58.141 sys 0m1.511s 00:13:58.141 01:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:58.141 01:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.141 ************************************ 00:13:58.141 END TEST raid5f_superblock_test 00:13:58.141 ************************************ 00:13:58.141 01:33:57 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:13:58.141 01:33:57 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:13:58.141 01:33:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:58.141 01:33:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:58.141 01:33:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:58.401 ************************************ 00:13:58.401 START TEST raid5f_rebuild_test 00:13:58.401 ************************************ 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:58.401 01:33:57 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # 
'[' raid5f '!=' raid1 ']' 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:58.401 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:58.402 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=93243 00:13:58.402 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:58.402 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 93243 00:13:58.402 01:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 93243 ']' 00:13:58.402 01:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.402 01:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:58.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.402 01:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.402 01:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:58.402 01:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.402 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:58.402 Zero copy mechanism will not be used. 00:13:58.402 [2024-10-09 01:33:57.149994] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:13:58.402 [2024-10-09 01:33:57.150142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93243 ] 00:13:58.402 [2024-10-09 01:33:57.286586] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:58.661 [2024-10-09 01:33:57.315278] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.661 [2024-10-09 01:33:57.386812] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.661 [2024-10-09 01:33:57.463181] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.661 [2024-10-09 01:33:57.463222] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.231 01:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:59.231 01:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:13:59.231 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:59.231 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:59.231 01:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.231 01:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.231 BaseBdev1_malloc 00:13:59.231 01:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.231 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:59.231 01:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.231 01:33:57 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.231 [2024-10-09 01:33:57.982649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:59.231 [2024-10-09 01:33:57.982724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.231 [2024-10-09 01:33:57.982754] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:59.231 [2024-10-09 01:33:57.982772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.231 [2024-10-09 01:33:57.985076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.231 [2024-10-09 01:33:57.985115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:59.231 BaseBdev1 00:13:59.231 01:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.231 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:59.231 01:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:59.231 01:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.231 01:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.231 BaseBdev2_malloc 00:13:59.231 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.231 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:59.231 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.231 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.231 [2024-10-09 01:33:58.033520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:13:59.231 [2024-10-09 01:33:58.033648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.231 [2024-10-09 01:33:58.033689] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:59.231 [2024-10-09 01:33:58.033715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.231 [2024-10-09 01:33:58.038431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.231 [2024-10-09 01:33:58.038499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:59.231 BaseBdev2 00:13:59.231 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.231 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:59.231 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:59.231 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.231 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.231 BaseBdev3_malloc 00:13:59.231 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.231 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:59.231 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.231 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.231 [2024-10-09 01:33:58.071457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:59.231 [2024-10-09 01:33:58.071509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.231 [2024-10-09 01:33:58.071541] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:13:59.231 [2024-10-09 01:33:58.071553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.231 [2024-10-09 01:33:58.073826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.231 [2024-10-09 01:33:58.073882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:59.231 BaseBdev3 00:13:59.231 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.231 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:59.231 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.231 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.231 spare_malloc 00:13:59.231 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.231 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:59.231 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.231 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.231 spare_delay 00:13:59.231 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.231 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:59.231 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.231 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.231 [2024-10-09 01:33:58.118127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:59.231 [2024-10-09 01:33:58.118180] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.231 [2024-10-09 01:33:58.118201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:59.231 [2024-10-09 01:33:58.118212] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.231 [2024-10-09 01:33:58.120673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.231 [2024-10-09 01:33:58.120711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:59.491 spare 00:13:59.491 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.491 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:59.491 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.491 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.491 [2024-10-09 01:33:58.130212] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:59.491 [2024-10-09 01:33:58.132231] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:59.491 [2024-10-09 01:33:58.132296] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:59.491 [2024-10-09 01:33:58.132377] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:59.491 [2024-10-09 01:33:58.132391] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:59.491 [2024-10-09 01:33:58.132641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:59.491 [2024-10-09 01:33:58.133105] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:59.491 [2024-10-09 01:33:58.133127] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:59.491 [2024-10-09 01:33:58.133257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.491 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.491 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:59.491 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.491 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.491 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:59.491 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.491 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.491 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.491 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.491 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.491 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.491 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.491 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.491 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.491 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.491 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.491 01:33:58 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.491 "name": "raid_bdev1", 00:13:59.491 "uuid": "48ab606b-a612-4394-a96b-13c236b99e16", 00:13:59.491 "strip_size_kb": 64, 00:13:59.491 "state": "online", 00:13:59.491 "raid_level": "raid5f", 00:13:59.491 "superblock": false, 00:13:59.491 "num_base_bdevs": 3, 00:13:59.491 "num_base_bdevs_discovered": 3, 00:13:59.491 "num_base_bdevs_operational": 3, 00:13:59.491 "base_bdevs_list": [ 00:13:59.491 { 00:13:59.491 "name": "BaseBdev1", 00:13:59.491 "uuid": "fe84ea4e-d035-5b75-9669-442c7f8bbca0", 00:13:59.491 "is_configured": true, 00:13:59.491 "data_offset": 0, 00:13:59.491 "data_size": 65536 00:13:59.491 }, 00:13:59.491 { 00:13:59.491 "name": "BaseBdev2", 00:13:59.491 "uuid": "ca1a538b-5447-5e70-81cf-a89db5940d1e", 00:13:59.491 "is_configured": true, 00:13:59.491 "data_offset": 0, 00:13:59.491 "data_size": 65536 00:13:59.491 }, 00:13:59.491 { 00:13:59.491 "name": "BaseBdev3", 00:13:59.491 "uuid": "a4c2f684-4fbf-5933-b722-fedad897bfc4", 00:13:59.491 "is_configured": true, 00:13:59.491 "data_offset": 0, 00:13:59.491 "data_size": 65536 00:13:59.491 } 00:13:59.491 ] 00:13:59.491 }' 00:13:59.491 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.491 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.751 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:59.751 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.751 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.751 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:59.751 [2024-10-09 01:33:58.575786] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:59.751 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:59.751 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:13:59.751 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.751 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.751 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.751 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:59.751 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:00.010 [2024-10-09 01:33:58.835797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:00.010 /dev/nbd0 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:00.010 1+0 records in 00:14:00.010 1+0 records out 00:14:00.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041376 s, 9.9 MB/s 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:00.010 01:33:58 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.270 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:00.270 01:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:00.270 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:00.270 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:00.270 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:00.270 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:00.270 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:00.270 01:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:14:00.529 512+0 records in 00:14:00.529 512+0 records out 00:14:00.529 67108864 bytes (67 MB, 64 MiB) copied, 0.357668 s, 188 MB/s 00:14:00.529 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:00.529 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.529 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:00.529 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:00.529 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:00.529 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:00.529 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:00.788 [2024-10-09 01:33:59.466813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.788 [2024-10-09 01:33:59.494896] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.788 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.788 "name": "raid_bdev1", 00:14:00.788 "uuid": "48ab606b-a612-4394-a96b-13c236b99e16", 00:14:00.788 "strip_size_kb": 64, 00:14:00.789 "state": "online", 00:14:00.789 "raid_level": "raid5f", 00:14:00.789 "superblock": false, 00:14:00.789 "num_base_bdevs": 3, 00:14:00.789 "num_base_bdevs_discovered": 2, 00:14:00.789 "num_base_bdevs_operational": 2, 00:14:00.789 "base_bdevs_list": [ 00:14:00.789 { 00:14:00.789 "name": null, 00:14:00.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.789 "is_configured": false, 00:14:00.789 "data_offset": 0, 00:14:00.789 "data_size": 65536 00:14:00.789 }, 00:14:00.789 { 00:14:00.789 "name": "BaseBdev2", 00:14:00.789 "uuid": "ca1a538b-5447-5e70-81cf-a89db5940d1e", 00:14:00.789 "is_configured": true, 00:14:00.789 "data_offset": 0, 00:14:00.789 "data_size": 65536 00:14:00.789 }, 00:14:00.789 { 00:14:00.789 "name": "BaseBdev3", 00:14:00.789 "uuid": 
"a4c2f684-4fbf-5933-b722-fedad897bfc4", 00:14:00.789 "is_configured": true, 00:14:00.789 "data_offset": 0, 00:14:00.789 "data_size": 65536 00:14:00.789 } 00:14:00.789 ] 00:14:00.789 }' 00:14:00.789 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.789 01:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.357 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:01.357 01:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.357 01:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.357 [2024-10-09 01:33:59.967028] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:01.357 [2024-10-09 01:33:59.973586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:14:01.357 01:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.357 01:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:01.357 [2024-10-09 01:33:59.976032] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:02.295 01:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.295 01:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.295 01:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.295 01:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.295 01:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.295 01:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.295 01:34:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.295 01:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.295 01:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.295 01:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.295 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.295 "name": "raid_bdev1", 00:14:02.295 "uuid": "48ab606b-a612-4394-a96b-13c236b99e16", 00:14:02.295 "strip_size_kb": 64, 00:14:02.295 "state": "online", 00:14:02.295 "raid_level": "raid5f", 00:14:02.295 "superblock": false, 00:14:02.295 "num_base_bdevs": 3, 00:14:02.295 "num_base_bdevs_discovered": 3, 00:14:02.295 "num_base_bdevs_operational": 3, 00:14:02.295 "process": { 00:14:02.295 "type": "rebuild", 00:14:02.295 "target": "spare", 00:14:02.295 "progress": { 00:14:02.295 "blocks": 20480, 00:14:02.295 "percent": 15 00:14:02.295 } 00:14:02.295 }, 00:14:02.295 "base_bdevs_list": [ 00:14:02.295 { 00:14:02.295 "name": "spare", 00:14:02.295 "uuid": "9c0e43a5-fc5f-5164-b1fd-7e6a37792d38", 00:14:02.295 "is_configured": true, 00:14:02.295 "data_offset": 0, 00:14:02.295 "data_size": 65536 00:14:02.295 }, 00:14:02.295 { 00:14:02.295 "name": "BaseBdev2", 00:14:02.295 "uuid": "ca1a538b-5447-5e70-81cf-a89db5940d1e", 00:14:02.295 "is_configured": true, 00:14:02.295 "data_offset": 0, 00:14:02.295 "data_size": 65536 00:14:02.295 }, 00:14:02.295 { 00:14:02.295 "name": "BaseBdev3", 00:14:02.295 "uuid": "a4c2f684-4fbf-5933-b722-fedad897bfc4", 00:14:02.295 "is_configured": true, 00:14:02.295 "data_offset": 0, 00:14:02.295 "data_size": 65536 00:14:02.295 } 00:14:02.295 ] 00:14:02.295 }' 00:14:02.295 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.295 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.295 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.295 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.295 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:02.295 01:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.295 01:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.295 [2024-10-09 01:34:01.113610] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.295 [2024-10-09 01:34:01.186412] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:02.295 [2024-10-09 01:34:01.186487] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.295 [2024-10-09 01:34:01.186508] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.295 [2024-10-09 01:34:01.186516] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:02.554 01:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.554 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:02.554 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.554 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.554 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.554 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.554 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:14:02.554 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.554 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.554 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.554 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.554 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.554 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.554 01:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.554 01:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.554 01:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.554 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.554 "name": "raid_bdev1", 00:14:02.554 "uuid": "48ab606b-a612-4394-a96b-13c236b99e16", 00:14:02.554 "strip_size_kb": 64, 00:14:02.554 "state": "online", 00:14:02.554 "raid_level": "raid5f", 00:14:02.554 "superblock": false, 00:14:02.554 "num_base_bdevs": 3, 00:14:02.554 "num_base_bdevs_discovered": 2, 00:14:02.554 "num_base_bdevs_operational": 2, 00:14:02.554 "base_bdevs_list": [ 00:14:02.554 { 00:14:02.554 "name": null, 00:14:02.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.554 "is_configured": false, 00:14:02.555 "data_offset": 0, 00:14:02.555 "data_size": 65536 00:14:02.555 }, 00:14:02.555 { 00:14:02.555 "name": "BaseBdev2", 00:14:02.555 "uuid": "ca1a538b-5447-5e70-81cf-a89db5940d1e", 00:14:02.555 "is_configured": true, 00:14:02.555 "data_offset": 0, 00:14:02.555 "data_size": 65536 00:14:02.555 }, 00:14:02.555 { 00:14:02.555 "name": "BaseBdev3", 00:14:02.555 "uuid": 
"a4c2f684-4fbf-5933-b722-fedad897bfc4", 00:14:02.555 "is_configured": true, 00:14:02.555 "data_offset": 0, 00:14:02.555 "data_size": 65536 00:14:02.555 } 00:14:02.555 ] 00:14:02.555 }' 00:14:02.555 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.555 01:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.814 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:02.814 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.814 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:02.814 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:02.814 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.814 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.814 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.814 01:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.814 01:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.814 01:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.814 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.814 "name": "raid_bdev1", 00:14:02.814 "uuid": "48ab606b-a612-4394-a96b-13c236b99e16", 00:14:02.814 "strip_size_kb": 64, 00:14:02.814 "state": "online", 00:14:02.814 "raid_level": "raid5f", 00:14:02.814 "superblock": false, 00:14:02.814 "num_base_bdevs": 3, 00:14:02.814 "num_base_bdevs_discovered": 2, 00:14:02.814 "num_base_bdevs_operational": 2, 00:14:02.814 "base_bdevs_list": [ 00:14:02.814 { 00:14:02.814 
"name": null, 00:14:02.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.814 "is_configured": false, 00:14:02.814 "data_offset": 0, 00:14:02.814 "data_size": 65536 00:14:02.814 }, 00:14:02.814 { 00:14:02.814 "name": "BaseBdev2", 00:14:02.814 "uuid": "ca1a538b-5447-5e70-81cf-a89db5940d1e", 00:14:02.814 "is_configured": true, 00:14:02.814 "data_offset": 0, 00:14:02.814 "data_size": 65536 00:14:02.814 }, 00:14:02.814 { 00:14:02.814 "name": "BaseBdev3", 00:14:02.814 "uuid": "a4c2f684-4fbf-5933-b722-fedad897bfc4", 00:14:02.814 "is_configured": true, 00:14:02.814 "data_offset": 0, 00:14:02.814 "data_size": 65536 00:14:02.814 } 00:14:02.814 ] 00:14:02.814 }' 00:14:02.814 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.073 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:03.073 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.073 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:03.073 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:03.073 01:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.073 01:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.073 [2024-10-09 01:34:01.771693] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:03.073 [2024-10-09 01:34:01.776066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:14:03.073 01:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.073 01:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:03.073 [2024-10-09 01:34:01.778482] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:14:04.011 01:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.011 01:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.011 01:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.011 01:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.011 01:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.011 01:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.011 01:34:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.011 01:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.011 01:34:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.011 01:34:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.011 01:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.011 "name": "raid_bdev1", 00:14:04.011 "uuid": "48ab606b-a612-4394-a96b-13c236b99e16", 00:14:04.011 "strip_size_kb": 64, 00:14:04.011 "state": "online", 00:14:04.011 "raid_level": "raid5f", 00:14:04.011 "superblock": false, 00:14:04.011 "num_base_bdevs": 3, 00:14:04.011 "num_base_bdevs_discovered": 3, 00:14:04.011 "num_base_bdevs_operational": 3, 00:14:04.011 "process": { 00:14:04.011 "type": "rebuild", 00:14:04.011 "target": "spare", 00:14:04.011 "progress": { 00:14:04.011 "blocks": 20480, 00:14:04.011 "percent": 15 00:14:04.011 } 00:14:04.011 }, 00:14:04.011 "base_bdevs_list": [ 00:14:04.011 { 00:14:04.011 "name": "spare", 00:14:04.011 "uuid": "9c0e43a5-fc5f-5164-b1fd-7e6a37792d38", 00:14:04.011 "is_configured": true, 00:14:04.011 "data_offset": 0, 
00:14:04.011 "data_size": 65536 00:14:04.011 }, 00:14:04.011 { 00:14:04.011 "name": "BaseBdev2", 00:14:04.011 "uuid": "ca1a538b-5447-5e70-81cf-a89db5940d1e", 00:14:04.011 "is_configured": true, 00:14:04.011 "data_offset": 0, 00:14:04.011 "data_size": 65536 00:14:04.011 }, 00:14:04.011 { 00:14:04.011 "name": "BaseBdev3", 00:14:04.011 "uuid": "a4c2f684-4fbf-5933-b722-fedad897bfc4", 00:14:04.011 "is_configured": true, 00:14:04.011 "data_offset": 0, 00:14:04.011 "data_size": 65536 00:14:04.011 } 00:14:04.011 ] 00:14:04.011 }' 00:14:04.011 01:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.011 01:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.011 01:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.270 01:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.270 01:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:04.270 01:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:04.270 01:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:04.270 01:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=458 00:14:04.270 01:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:04.270 01:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.270 01:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.270 01:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.270 01:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.270 01:34:02 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.270 01:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.270 01:34:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.270 01:34:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.270 01:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.270 01:34:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.270 01:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.270 "name": "raid_bdev1", 00:14:04.270 "uuid": "48ab606b-a612-4394-a96b-13c236b99e16", 00:14:04.270 "strip_size_kb": 64, 00:14:04.270 "state": "online", 00:14:04.270 "raid_level": "raid5f", 00:14:04.270 "superblock": false, 00:14:04.271 "num_base_bdevs": 3, 00:14:04.271 "num_base_bdevs_discovered": 3, 00:14:04.271 "num_base_bdevs_operational": 3, 00:14:04.271 "process": { 00:14:04.271 "type": "rebuild", 00:14:04.271 "target": "spare", 00:14:04.271 "progress": { 00:14:04.271 "blocks": 22528, 00:14:04.271 "percent": 17 00:14:04.271 } 00:14:04.271 }, 00:14:04.271 "base_bdevs_list": [ 00:14:04.271 { 00:14:04.271 "name": "spare", 00:14:04.271 "uuid": "9c0e43a5-fc5f-5164-b1fd-7e6a37792d38", 00:14:04.271 "is_configured": true, 00:14:04.271 "data_offset": 0, 00:14:04.271 "data_size": 65536 00:14:04.271 }, 00:14:04.271 { 00:14:04.271 "name": "BaseBdev2", 00:14:04.271 "uuid": "ca1a538b-5447-5e70-81cf-a89db5940d1e", 00:14:04.271 "is_configured": true, 00:14:04.271 "data_offset": 0, 00:14:04.271 "data_size": 65536 00:14:04.271 }, 00:14:04.271 { 00:14:04.271 "name": "BaseBdev3", 00:14:04.271 "uuid": "a4c2f684-4fbf-5933-b722-fedad897bfc4", 00:14:04.271 "is_configured": true, 00:14:04.271 "data_offset": 0, 00:14:04.271 "data_size": 65536 00:14:04.271 } 
00:14:04.271 ] 00:14:04.271 }' 00:14:04.271 01:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.271 01:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.271 01:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.271 01:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.271 01:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:05.244 01:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:05.244 01:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.244 01:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.244 01:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.244 01:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.244 01:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.244 01:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.244 01:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.244 01:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.244 01:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.244 01:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.244 01:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.244 "name": "raid_bdev1", 00:14:05.244 "uuid": "48ab606b-a612-4394-a96b-13c236b99e16", 00:14:05.244 
"strip_size_kb": 64, 00:14:05.244 "state": "online", 00:14:05.244 "raid_level": "raid5f", 00:14:05.244 "superblock": false, 00:14:05.244 "num_base_bdevs": 3, 00:14:05.244 "num_base_bdevs_discovered": 3, 00:14:05.244 "num_base_bdevs_operational": 3, 00:14:05.244 "process": { 00:14:05.244 "type": "rebuild", 00:14:05.244 "target": "spare", 00:14:05.244 "progress": { 00:14:05.244 "blocks": 45056, 00:14:05.244 "percent": 34 00:14:05.244 } 00:14:05.244 }, 00:14:05.244 "base_bdevs_list": [ 00:14:05.244 { 00:14:05.244 "name": "spare", 00:14:05.244 "uuid": "9c0e43a5-fc5f-5164-b1fd-7e6a37792d38", 00:14:05.244 "is_configured": true, 00:14:05.244 "data_offset": 0, 00:14:05.244 "data_size": 65536 00:14:05.244 }, 00:14:05.244 { 00:14:05.244 "name": "BaseBdev2", 00:14:05.244 "uuid": "ca1a538b-5447-5e70-81cf-a89db5940d1e", 00:14:05.244 "is_configured": true, 00:14:05.244 "data_offset": 0, 00:14:05.244 "data_size": 65536 00:14:05.244 }, 00:14:05.244 { 00:14:05.244 "name": "BaseBdev3", 00:14:05.244 "uuid": "a4c2f684-4fbf-5933-b722-fedad897bfc4", 00:14:05.244 "is_configured": true, 00:14:05.244 "data_offset": 0, 00:14:05.244 "data_size": 65536 00:14:05.244 } 00:14:05.244 ] 00:14:05.244 }' 00:14:05.244 01:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.503 01:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.503 01:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.503 01:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.503 01:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:06.440 01:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:06.440 01:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.440 01:34:05 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.440 01:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.440 01:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.440 01:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.440 01:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.440 01:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.440 01:34:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.440 01:34:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.440 01:34:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.440 01:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.440 "name": "raid_bdev1", 00:14:06.440 "uuid": "48ab606b-a612-4394-a96b-13c236b99e16", 00:14:06.440 "strip_size_kb": 64, 00:14:06.440 "state": "online", 00:14:06.440 "raid_level": "raid5f", 00:14:06.440 "superblock": false, 00:14:06.440 "num_base_bdevs": 3, 00:14:06.440 "num_base_bdevs_discovered": 3, 00:14:06.440 "num_base_bdevs_operational": 3, 00:14:06.440 "process": { 00:14:06.440 "type": "rebuild", 00:14:06.440 "target": "spare", 00:14:06.440 "progress": { 00:14:06.440 "blocks": 69632, 00:14:06.440 "percent": 53 00:14:06.440 } 00:14:06.440 }, 00:14:06.440 "base_bdevs_list": [ 00:14:06.440 { 00:14:06.440 "name": "spare", 00:14:06.440 "uuid": "9c0e43a5-fc5f-5164-b1fd-7e6a37792d38", 00:14:06.440 "is_configured": true, 00:14:06.440 "data_offset": 0, 00:14:06.440 "data_size": 65536 00:14:06.440 }, 00:14:06.440 { 00:14:06.440 "name": "BaseBdev2", 00:14:06.440 "uuid": "ca1a538b-5447-5e70-81cf-a89db5940d1e", 00:14:06.440 
"is_configured": true, 00:14:06.440 "data_offset": 0, 00:14:06.440 "data_size": 65536 00:14:06.440 }, 00:14:06.440 { 00:14:06.440 "name": "BaseBdev3", 00:14:06.440 "uuid": "a4c2f684-4fbf-5933-b722-fedad897bfc4", 00:14:06.440 "is_configured": true, 00:14:06.440 "data_offset": 0, 00:14:06.440 "data_size": 65536 00:14:06.440 } 00:14:06.440 ] 00:14:06.440 }' 00:14:06.440 01:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.440 01:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.440 01:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.440 01:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.440 01:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:07.818 01:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:07.818 01:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.818 01:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.818 01:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.818 01:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.818 01:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.818 01:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.818 01:34:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.818 01:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.818 01:34:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:07.818 01:34:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.818 01:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.818 "name": "raid_bdev1", 00:14:07.818 "uuid": "48ab606b-a612-4394-a96b-13c236b99e16", 00:14:07.818 "strip_size_kb": 64, 00:14:07.818 "state": "online", 00:14:07.818 "raid_level": "raid5f", 00:14:07.818 "superblock": false, 00:14:07.818 "num_base_bdevs": 3, 00:14:07.818 "num_base_bdevs_discovered": 3, 00:14:07.818 "num_base_bdevs_operational": 3, 00:14:07.818 "process": { 00:14:07.818 "type": "rebuild", 00:14:07.818 "target": "spare", 00:14:07.818 "progress": { 00:14:07.818 "blocks": 92160, 00:14:07.819 "percent": 70 00:14:07.819 } 00:14:07.819 }, 00:14:07.819 "base_bdevs_list": [ 00:14:07.819 { 00:14:07.819 "name": "spare", 00:14:07.819 "uuid": "9c0e43a5-fc5f-5164-b1fd-7e6a37792d38", 00:14:07.819 "is_configured": true, 00:14:07.819 "data_offset": 0, 00:14:07.819 "data_size": 65536 00:14:07.819 }, 00:14:07.819 { 00:14:07.819 "name": "BaseBdev2", 00:14:07.819 "uuid": "ca1a538b-5447-5e70-81cf-a89db5940d1e", 00:14:07.819 "is_configured": true, 00:14:07.819 "data_offset": 0, 00:14:07.819 "data_size": 65536 00:14:07.819 }, 00:14:07.819 { 00:14:07.819 "name": "BaseBdev3", 00:14:07.819 "uuid": "a4c2f684-4fbf-5933-b722-fedad897bfc4", 00:14:07.819 "is_configured": true, 00:14:07.819 "data_offset": 0, 00:14:07.819 "data_size": 65536 00:14:07.819 } 00:14:07.819 ] 00:14:07.819 }' 00:14:07.819 01:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.819 01:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:07.819 01:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.819 01:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:07.819 01:34:06 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:08.756 01:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:08.756 01:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.756 01:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.756 01:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.756 01:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.756 01:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.756 01:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.756 01:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.756 01:34:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.756 01:34:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.756 01:34:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.756 01:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.756 "name": "raid_bdev1", 00:14:08.756 "uuid": "48ab606b-a612-4394-a96b-13c236b99e16", 00:14:08.756 "strip_size_kb": 64, 00:14:08.756 "state": "online", 00:14:08.756 "raid_level": "raid5f", 00:14:08.756 "superblock": false, 00:14:08.756 "num_base_bdevs": 3, 00:14:08.756 "num_base_bdevs_discovered": 3, 00:14:08.756 "num_base_bdevs_operational": 3, 00:14:08.756 "process": { 00:14:08.756 "type": "rebuild", 00:14:08.756 "target": "spare", 00:14:08.756 "progress": { 00:14:08.756 "blocks": 114688, 00:14:08.756 "percent": 87 00:14:08.756 } 00:14:08.756 }, 00:14:08.756 "base_bdevs_list": [ 00:14:08.756 { 
00:14:08.756 "name": "spare", 00:14:08.756 "uuid": "9c0e43a5-fc5f-5164-b1fd-7e6a37792d38", 00:14:08.756 "is_configured": true, 00:14:08.756 "data_offset": 0, 00:14:08.756 "data_size": 65536 00:14:08.756 }, 00:14:08.756 { 00:14:08.756 "name": "BaseBdev2", 00:14:08.756 "uuid": "ca1a538b-5447-5e70-81cf-a89db5940d1e", 00:14:08.756 "is_configured": true, 00:14:08.756 "data_offset": 0, 00:14:08.756 "data_size": 65536 00:14:08.756 }, 00:14:08.756 { 00:14:08.756 "name": "BaseBdev3", 00:14:08.756 "uuid": "a4c2f684-4fbf-5933-b722-fedad897bfc4", 00:14:08.756 "is_configured": true, 00:14:08.756 "data_offset": 0, 00:14:08.757 "data_size": 65536 00:14:08.757 } 00:14:08.757 ] 00:14:08.757 }' 00:14:08.757 01:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.757 01:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:08.757 01:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.757 01:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:08.757 01:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:09.694 [2024-10-09 01:34:08.230207] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:09.694 [2024-10-09 01:34:08.230287] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:09.694 [2024-10-09 01:34:08.230330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.953 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:09.953 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.953 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.953 01:34:08 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.953 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.953 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.953 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.953 01:34:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.953 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.953 01:34:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.953 01:34:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.953 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.953 "name": "raid_bdev1", 00:14:09.953 "uuid": "48ab606b-a612-4394-a96b-13c236b99e16", 00:14:09.953 "strip_size_kb": 64, 00:14:09.953 "state": "online", 00:14:09.953 "raid_level": "raid5f", 00:14:09.953 "superblock": false, 00:14:09.953 "num_base_bdevs": 3, 00:14:09.953 "num_base_bdevs_discovered": 3, 00:14:09.953 "num_base_bdevs_operational": 3, 00:14:09.953 "base_bdevs_list": [ 00:14:09.953 { 00:14:09.953 "name": "spare", 00:14:09.953 "uuid": "9c0e43a5-fc5f-5164-b1fd-7e6a37792d38", 00:14:09.953 "is_configured": true, 00:14:09.953 "data_offset": 0, 00:14:09.953 "data_size": 65536 00:14:09.953 }, 00:14:09.953 { 00:14:09.953 "name": "BaseBdev2", 00:14:09.953 "uuid": "ca1a538b-5447-5e70-81cf-a89db5940d1e", 00:14:09.953 "is_configured": true, 00:14:09.953 "data_offset": 0, 00:14:09.953 "data_size": 65536 00:14:09.953 }, 00:14:09.953 { 00:14:09.953 "name": "BaseBdev3", 00:14:09.953 "uuid": "a4c2f684-4fbf-5933-b722-fedad897bfc4", 00:14:09.953 "is_configured": true, 00:14:09.953 "data_offset": 0, 00:14:09.953 "data_size": 65536 00:14:09.953 } 
00:14:09.953 ] 00:14:09.953 }' 00:14:09.953 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.953 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:09.953 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.953 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:09.953 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:09.953 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:09.953 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.953 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:09.953 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:09.953 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.953 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.953 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.953 01:34:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.953 01:34:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.953 01:34:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.953 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.953 "name": "raid_bdev1", 00:14:09.953 "uuid": "48ab606b-a612-4394-a96b-13c236b99e16", 00:14:09.953 "strip_size_kb": 64, 00:14:09.954 "state": "online", 00:14:09.954 "raid_level": "raid5f", 00:14:09.954 "superblock": false, 
00:14:09.954 "num_base_bdevs": 3, 00:14:09.954 "num_base_bdevs_discovered": 3, 00:14:09.954 "num_base_bdevs_operational": 3, 00:14:09.954 "base_bdevs_list": [ 00:14:09.954 { 00:14:09.954 "name": "spare", 00:14:09.954 "uuid": "9c0e43a5-fc5f-5164-b1fd-7e6a37792d38", 00:14:09.954 "is_configured": true, 00:14:09.954 "data_offset": 0, 00:14:09.954 "data_size": 65536 00:14:09.954 }, 00:14:09.954 { 00:14:09.954 "name": "BaseBdev2", 00:14:09.954 "uuid": "ca1a538b-5447-5e70-81cf-a89db5940d1e", 00:14:09.954 "is_configured": true, 00:14:09.954 "data_offset": 0, 00:14:09.954 "data_size": 65536 00:14:09.954 }, 00:14:09.954 { 00:14:09.954 "name": "BaseBdev3", 00:14:09.954 "uuid": "a4c2f684-4fbf-5933-b722-fedad897bfc4", 00:14:09.954 "is_configured": true, 00:14:09.954 "data_offset": 0, 00:14:09.954 "data_size": 65536 00:14:09.954 } 00:14:09.954 ] 00:14:09.954 }' 00:14:09.954 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.213 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:10.213 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.213 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:10.213 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:10.213 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.213 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.213 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:10.213 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.213 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.213 
01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.213 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.213 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.213 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.213 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.213 01:34:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.213 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.213 01:34:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.213 01:34:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.213 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.213 "name": "raid_bdev1", 00:14:10.213 "uuid": "48ab606b-a612-4394-a96b-13c236b99e16", 00:14:10.213 "strip_size_kb": 64, 00:14:10.213 "state": "online", 00:14:10.213 "raid_level": "raid5f", 00:14:10.213 "superblock": false, 00:14:10.213 "num_base_bdevs": 3, 00:14:10.213 "num_base_bdevs_discovered": 3, 00:14:10.213 "num_base_bdevs_operational": 3, 00:14:10.213 "base_bdevs_list": [ 00:14:10.213 { 00:14:10.213 "name": "spare", 00:14:10.213 "uuid": "9c0e43a5-fc5f-5164-b1fd-7e6a37792d38", 00:14:10.213 "is_configured": true, 00:14:10.213 "data_offset": 0, 00:14:10.213 "data_size": 65536 00:14:10.213 }, 00:14:10.213 { 00:14:10.213 "name": "BaseBdev2", 00:14:10.213 "uuid": "ca1a538b-5447-5e70-81cf-a89db5940d1e", 00:14:10.213 "is_configured": true, 00:14:10.213 "data_offset": 0, 00:14:10.213 "data_size": 65536 00:14:10.213 }, 00:14:10.213 { 00:14:10.213 "name": "BaseBdev3", 00:14:10.213 "uuid": "a4c2f684-4fbf-5933-b722-fedad897bfc4", 
00:14:10.213 "is_configured": true, 00:14:10.213 "data_offset": 0, 00:14:10.213 "data_size": 65536 00:14:10.213 } 00:14:10.213 ] 00:14:10.213 }' 00:14:10.213 01:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.213 01:34:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.473 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:10.473 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.473 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.473 [2024-10-09 01:34:09.299192] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:10.473 [2024-10-09 01:34:09.299230] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:10.473 [2024-10-09 01:34:09.299311] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:10.473 [2024-10-09 01:34:09.299389] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:10.473 [2024-10-09 01:34:09.299402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:10.473 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.473 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.473 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:10.473 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.473 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.473 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.473 01:34:09 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:10.473 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:10.473 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:10.473 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:10.473 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:10.473 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:10.473 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:10.473 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:10.473 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:10.473 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:10.473 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:10.473 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:10.473 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:10.732 /dev/nbd0 00:14:10.732 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:10.732 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:10.732 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:10.732 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:10.732 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:10.733 01:34:09 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:10.733 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:10.733 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:10.733 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:10.733 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:10.733 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:10.733 1+0 records in 00:14:10.733 1+0 records out 00:14:10.733 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494083 s, 8.3 MB/s 00:14:10.733 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.733 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:10.733 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.733 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:10.733 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:10.733 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:10.733 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:10.733 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:10.992 /dev/nbd1 00:14:10.992 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:10.992 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:10.992 01:34:09 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:10.992 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:10.992 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:10.992 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:10.992 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:10.992 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:10.992 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:10.992 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:10.992 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:10.992 1+0 records in 00:14:10.992 1+0 records out 00:14:10.992 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376995 s, 10.9 MB/s 00:14:10.992 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.992 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:10.992 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.992 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:10.992 01:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:10.992 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:10.992 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:10.992 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:11.251 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:11.251 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:11.251 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:11.251 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:11.251 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:11.251 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.252 01:34:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:11.252 01:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:11.252 01:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:11.252 01:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:11.252 01:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.252 01:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.252 01:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:11.511 01:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:11.511 01:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.511 01:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.511 01:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:11.511 01:34:10 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:11.511 01:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:11.511 01:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:11.511 01:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.511 01:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.511 01:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:11.511 01:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:11.511 01:34:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.511 01:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:11.511 01:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 93243 00:14:11.511 01:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 93243 ']' 00:14:11.511 01:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 93243 00:14:11.511 01:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:14:11.511 01:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:11.511 01:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93243 00:14:11.771 01:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:11.771 01:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:11.771 killing process with pid 93243 00:14:11.771 01:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93243' 00:14:11.771 01:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 93243 00:14:11.771 
Received shutdown signal, test time was about 60.000000 seconds 00:14:11.771 00:14:11.771 Latency(us) 00:14:11.771 [2024-10-09T01:34:10.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.771 [2024-10-09T01:34:10.664Z] =================================================================================================================== 00:14:11.771 [2024-10-09T01:34:10.664Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:11.771 [2024-10-09 01:34:10.404968] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:11.772 01:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 93243 00:14:11.772 [2024-10-09 01:34:10.478688] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:12.032 01:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:12.032 00:14:12.032 real 0m13.800s 00:14:12.032 user 0m16.917s 00:14:12.032 sys 0m2.215s 00:14:12.032 01:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:12.032 01:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.032 ************************************ 00:14:12.032 END TEST raid5f_rebuild_test 00:14:12.032 ************************************ 00:14:12.032 01:34:10 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:14:12.032 01:34:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:12.032 01:34:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:12.032 01:34:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:12.032 ************************************ 00:14:12.032 START TEST raid5f_rebuild_test_sb 00:14:12.032 ************************************ 00:14:12.032 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:14:12.032 
01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:12.032 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:12.032 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:12.032 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:12.032 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=93661 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 93661 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 93661 ']' 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:12.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:12.293 01:34:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.293 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:12.293 Zero copy mechanism will not be used. 00:14:12.293 [2024-10-09 01:34:11.028449] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:14:12.293 [2024-10-09 01:34:11.028591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93661 ] 00:14:12.293 [2024-10-09 01:34:11.165934] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:12.553 [2024-10-09 01:34:11.196118] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.553 [2024-10-09 01:34:11.269620] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.553 [2024-10-09 01:34:11.345688] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.553 [2024-10-09 01:34:11.345732] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.122 BaseBdev1_malloc 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.122 [2024-10-09 01:34:11.852016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:13.122 [2024-10-09 01:34:11.852096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.122 [2024-10-09 01:34:11.852124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:13.122 
[2024-10-09 01:34:11.852141] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.122 [2024-10-09 01:34:11.854497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.122 [2024-10-09 01:34:11.854544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:13.122 BaseBdev1 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.122 BaseBdev2_malloc 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.122 [2024-10-09 01:34:11.901555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:13.122 [2024-10-09 01:34:11.901655] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.122 [2024-10-09 01:34:11.901696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:13.122 [2024-10-09 01:34:11.901722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.122 [2024-10-09 01:34:11.906461] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.122 [2024-10-09 01:34:11.906586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:13.122 BaseBdev2 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.122 BaseBdev3_malloc 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.122 [2024-10-09 01:34:11.939319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:13.122 [2024-10-09 01:34:11.939371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.122 [2024-10-09 01:34:11.939393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:13.122 [2024-10-09 01:34:11.939404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.122 [2024-10-09 01:34:11.941730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.122 [2024-10-09 01:34:11.941769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:14:13.122 BaseBdev3 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.122 spare_malloc 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.122 spare_delay 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.122 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.122 [2024-10-09 01:34:11.985779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:13.122 [2024-10-09 01:34:11.985831] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.123 [2024-10-09 01:34:11.985851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:13.123 [2024-10-09 01:34:11.985862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.123 [2024-10-09 01:34:11.988140] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.123 [2024-10-09 01:34:11.988176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:13.123 spare 00:14:13.123 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.123 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:13.123 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.123 01:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.123 [2024-10-09 01:34:11.997875] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:13.123 [2024-10-09 01:34:11.999841] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:13.123 [2024-10-09 01:34:11.999904] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:13.123 [2024-10-09 01:34:12.000061] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:13.123 [2024-10-09 01:34:12.000079] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:13.123 [2024-10-09 01:34:12.000319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:13.123 [2024-10-09 01:34:12.000756] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:13.123 [2024-10-09 01:34:12.000777] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:13.123 [2024-10-09 01:34:12.000893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.123 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.123 01:34:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:13.123 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.123 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.123 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:13.123 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.123 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.123 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.123 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.123 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.123 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.123 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.123 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.123 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.123 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.382 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.382 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.382 "name": "raid_bdev1", 00:14:13.382 "uuid": "c194376b-84a3-49c5-9b40-d88cf71185fd", 00:14:13.382 "strip_size_kb": 64, 00:14:13.382 "state": "online", 00:14:13.382 "raid_level": "raid5f", 00:14:13.382 "superblock": true, 
00:14:13.382 "num_base_bdevs": 3, 00:14:13.382 "num_base_bdevs_discovered": 3, 00:14:13.382 "num_base_bdevs_operational": 3, 00:14:13.382 "base_bdevs_list": [ 00:14:13.382 { 00:14:13.382 "name": "BaseBdev1", 00:14:13.382 "uuid": "c6978447-b727-5302-a4f1-2607255bd83b", 00:14:13.382 "is_configured": true, 00:14:13.382 "data_offset": 2048, 00:14:13.382 "data_size": 63488 00:14:13.382 }, 00:14:13.382 { 00:14:13.382 "name": "BaseBdev2", 00:14:13.382 "uuid": "306d8428-1814-539b-9b17-12d33b62308d", 00:14:13.382 "is_configured": true, 00:14:13.382 "data_offset": 2048, 00:14:13.382 "data_size": 63488 00:14:13.382 }, 00:14:13.382 { 00:14:13.382 "name": "BaseBdev3", 00:14:13.382 "uuid": "0dc2ddeb-f69d-5f85-afd2-ea675e1b8a86", 00:14:13.382 "is_configured": true, 00:14:13.382 "data_offset": 2048, 00:14:13.382 "data_size": 63488 00:14:13.382 } 00:14:13.382 ] 00:14:13.382 }' 00:14:13.382 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.382 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.641 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:13.641 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:13.641 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.641 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.641 [2024-10-09 01:34:12.467413] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:13.641 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.641 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:14:13.641 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.641 01:34:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:13.641 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.641 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.641 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.901 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:13.901 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:13.901 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:13.901 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:13.901 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:13.901 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:13.901 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:13.901 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:13.901 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:13.901 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:13.901 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:13.901 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:13.901 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:13.901 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 
00:14:13.901 [2024-10-09 01:34:12.735400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:13.901 /dev/nbd0 00:14:13.901 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:14.161 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:14.161 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:14.161 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:14.161 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:14.161 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:14.161 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:14.161 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:14.161 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:14.161 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:14.161 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:14.161 1+0 records in 00:14:14.161 1+0 records out 00:14:14.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394658 s, 10.4 MB/s 00:14:14.161 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.161 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:14.161 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.161 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:14.161 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:14.161 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:14.161 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:14.161 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:14.161 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:14.161 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:14.161 01:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:14:14.420 496+0 records in 00:14:14.420 496+0 records out 00:14:14.420 65011712 bytes (65 MB, 62 MiB) copied, 0.311162 s, 209 MB/s 00:14:14.420 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:14.420 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:14.420 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:14.420 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:14.420 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:14.420 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:14.420 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 
00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:14.680 [2024-10-09 01:34:13.351205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.680 [2024-10-09 01:34:13.367291] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:14.680 01:34:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.680 "name": "raid_bdev1", 00:14:14.680 "uuid": "c194376b-84a3-49c5-9b40-d88cf71185fd", 00:14:14.680 "strip_size_kb": 64, 00:14:14.680 "state": "online", 00:14:14.680 "raid_level": "raid5f", 00:14:14.680 "superblock": true, 00:14:14.680 "num_base_bdevs": 3, 00:14:14.680 "num_base_bdevs_discovered": 2, 00:14:14.680 "num_base_bdevs_operational": 2, 00:14:14.680 "base_bdevs_list": [ 00:14:14.680 { 00:14:14.680 "name": null, 00:14:14.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.680 "is_configured": false, 00:14:14.680 "data_offset": 0, 00:14:14.680 "data_size": 63488 00:14:14.680 }, 00:14:14.680 { 00:14:14.680 "name": "BaseBdev2", 00:14:14.680 "uuid": "306d8428-1814-539b-9b17-12d33b62308d", 00:14:14.680 "is_configured": true, 00:14:14.680 "data_offset": 2048, 00:14:14.680 "data_size": 63488 00:14:14.680 }, 00:14:14.680 { 00:14:14.680 "name": "BaseBdev3", 00:14:14.680 "uuid": 
"0dc2ddeb-f69d-5f85-afd2-ea675e1b8a86", 00:14:14.680 "is_configured": true, 00:14:14.680 "data_offset": 2048, 00:14:14.680 "data_size": 63488 00:14:14.680 } 00:14:14.680 ] 00:14:14.680 }' 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.680 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.940 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:14.940 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.940 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.940 [2024-10-09 01:34:13.787371] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:14.940 [2024-10-09 01:34:13.793924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:14:14.940 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.940 01:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:14.940 [2024-10-09 01:34:13.796259] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:16.320 01:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.320 01:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.320 01:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.320 01:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.320 01:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.320 01:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:14:16.320 01:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.320 01:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.320 01:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.320 01:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.320 01:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.320 "name": "raid_bdev1", 00:14:16.320 "uuid": "c194376b-84a3-49c5-9b40-d88cf71185fd", 00:14:16.320 "strip_size_kb": 64, 00:14:16.320 "state": "online", 00:14:16.320 "raid_level": "raid5f", 00:14:16.320 "superblock": true, 00:14:16.320 "num_base_bdevs": 3, 00:14:16.320 "num_base_bdevs_discovered": 3, 00:14:16.320 "num_base_bdevs_operational": 3, 00:14:16.320 "process": { 00:14:16.320 "type": "rebuild", 00:14:16.320 "target": "spare", 00:14:16.320 "progress": { 00:14:16.320 "blocks": 20480, 00:14:16.320 "percent": 16 00:14:16.320 } 00:14:16.320 }, 00:14:16.320 "base_bdevs_list": [ 00:14:16.320 { 00:14:16.320 "name": "spare", 00:14:16.320 "uuid": "b192dc1c-850e-5e49-bd2a-97437989ac4a", 00:14:16.320 "is_configured": true, 00:14:16.320 "data_offset": 2048, 00:14:16.320 "data_size": 63488 00:14:16.320 }, 00:14:16.320 { 00:14:16.320 "name": "BaseBdev2", 00:14:16.320 "uuid": "306d8428-1814-539b-9b17-12d33b62308d", 00:14:16.320 "is_configured": true, 00:14:16.320 "data_offset": 2048, 00:14:16.320 "data_size": 63488 00:14:16.320 }, 00:14:16.320 { 00:14:16.320 "name": "BaseBdev3", 00:14:16.320 "uuid": "0dc2ddeb-f69d-5f85-afd2-ea675e1b8a86", 00:14:16.320 "is_configured": true, 00:14:16.320 "data_offset": 2048, 00:14:16.320 "data_size": 63488 00:14:16.320 } 00:14:16.320 ] 00:14:16.320 }' 00:14:16.320 01:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.320 01:34:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.320 01:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.320 01:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.320 01:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:16.320 01:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.320 01:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.320 [2024-10-09 01:34:14.953592] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:16.320 [2024-10-09 01:34:15.007145] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:16.320 [2024-10-09 01:34:15.007198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.320 [2024-10-09 01:34:15.007216] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:16.320 [2024-10-09 01:34:15.007224] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:16.320 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.320 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:16.320 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.320 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.320 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:16.320 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.320 01:34:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:16.320 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.320 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.320 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.320 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.320 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.320 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.320 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.320 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.320 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.320 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.320 "name": "raid_bdev1", 00:14:16.320 "uuid": "c194376b-84a3-49c5-9b40-d88cf71185fd", 00:14:16.320 "strip_size_kb": 64, 00:14:16.320 "state": "online", 00:14:16.320 "raid_level": "raid5f", 00:14:16.320 "superblock": true, 00:14:16.320 "num_base_bdevs": 3, 00:14:16.320 "num_base_bdevs_discovered": 2, 00:14:16.320 "num_base_bdevs_operational": 2, 00:14:16.320 "base_bdevs_list": [ 00:14:16.320 { 00:14:16.320 "name": null, 00:14:16.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.320 "is_configured": false, 00:14:16.320 "data_offset": 0, 00:14:16.320 "data_size": 63488 00:14:16.320 }, 00:14:16.320 { 00:14:16.320 "name": "BaseBdev2", 00:14:16.320 "uuid": "306d8428-1814-539b-9b17-12d33b62308d", 00:14:16.320 "is_configured": true, 00:14:16.320 "data_offset": 2048, 00:14:16.320 "data_size": 
63488 00:14:16.320 }, 00:14:16.320 { 00:14:16.320 "name": "BaseBdev3", 00:14:16.320 "uuid": "0dc2ddeb-f69d-5f85-afd2-ea675e1b8a86", 00:14:16.320 "is_configured": true, 00:14:16.320 "data_offset": 2048, 00:14:16.320 "data_size": 63488 00:14:16.320 } 00:14:16.320 ] 00:14:16.320 }' 00:14:16.320 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.320 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.890 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:16.890 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.890 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:16.890 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:16.890 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.890 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.890 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.890 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.890 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.890 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.890 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.890 "name": "raid_bdev1", 00:14:16.890 "uuid": "c194376b-84a3-49c5-9b40-d88cf71185fd", 00:14:16.890 "strip_size_kb": 64, 00:14:16.890 "state": "online", 00:14:16.890 "raid_level": "raid5f", 00:14:16.890 "superblock": true, 00:14:16.890 "num_base_bdevs": 3, 00:14:16.890 
"num_base_bdevs_discovered": 2, 00:14:16.890 "num_base_bdevs_operational": 2, 00:14:16.890 "base_bdevs_list": [ 00:14:16.890 { 00:14:16.890 "name": null, 00:14:16.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.890 "is_configured": false, 00:14:16.890 "data_offset": 0, 00:14:16.890 "data_size": 63488 00:14:16.890 }, 00:14:16.890 { 00:14:16.890 "name": "BaseBdev2", 00:14:16.890 "uuid": "306d8428-1814-539b-9b17-12d33b62308d", 00:14:16.890 "is_configured": true, 00:14:16.890 "data_offset": 2048, 00:14:16.890 "data_size": 63488 00:14:16.890 }, 00:14:16.890 { 00:14:16.890 "name": "BaseBdev3", 00:14:16.890 "uuid": "0dc2ddeb-f69d-5f85-afd2-ea675e1b8a86", 00:14:16.890 "is_configured": true, 00:14:16.890 "data_offset": 2048, 00:14:16.890 "data_size": 63488 00:14:16.890 } 00:14:16.890 ] 00:14:16.890 }' 00:14:16.890 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.890 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:16.890 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.890 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:16.890 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:16.890 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.890 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.890 [2024-10-09 01:34:15.612128] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:16.890 [2024-10-09 01:34:15.617845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029120 00:14:16.890 01:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.890 01:34:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:16.890 [2024-10-09 01:34:15.620140] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:17.828 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.828 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.828 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.828 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.828 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.828 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.828 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.828 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.828 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.828 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.828 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.828 "name": "raid_bdev1", 00:14:17.828 "uuid": "c194376b-84a3-49c5-9b40-d88cf71185fd", 00:14:17.828 "strip_size_kb": 64, 00:14:17.828 "state": "online", 00:14:17.828 "raid_level": "raid5f", 00:14:17.828 "superblock": true, 00:14:17.828 "num_base_bdevs": 3, 00:14:17.828 "num_base_bdevs_discovered": 3, 00:14:17.828 "num_base_bdevs_operational": 3, 00:14:17.828 "process": { 00:14:17.828 "type": "rebuild", 00:14:17.828 "target": "spare", 00:14:17.828 "progress": { 00:14:17.828 "blocks": 20480, 00:14:17.828 "percent": 16 00:14:17.828 } 
00:14:17.828 }, 00:14:17.828 "base_bdevs_list": [ 00:14:17.828 { 00:14:17.828 "name": "spare", 00:14:17.828 "uuid": "b192dc1c-850e-5e49-bd2a-97437989ac4a", 00:14:17.828 "is_configured": true, 00:14:17.828 "data_offset": 2048, 00:14:17.828 "data_size": 63488 00:14:17.828 }, 00:14:17.828 { 00:14:17.828 "name": "BaseBdev2", 00:14:17.828 "uuid": "306d8428-1814-539b-9b17-12d33b62308d", 00:14:17.828 "is_configured": true, 00:14:17.828 "data_offset": 2048, 00:14:17.828 "data_size": 63488 00:14:17.828 }, 00:14:17.828 { 00:14:17.828 "name": "BaseBdev3", 00:14:17.828 "uuid": "0dc2ddeb-f69d-5f85-afd2-ea675e1b8a86", 00:14:17.828 "is_configured": true, 00:14:17.828 "data_offset": 2048, 00:14:17.828 "data_size": 63488 00:14:17.828 } 00:14:17.828 ] 00:14:17.828 }' 00:14:17.828 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.828 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.828 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.088 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.088 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:18.088 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:18.088 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:18.088 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:18.088 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:18.088 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=472 00:14:18.088 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:18.088 01:34:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.088 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.088 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.088 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.088 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.088 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.088 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.088 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.088 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.088 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.088 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.088 "name": "raid_bdev1", 00:14:18.088 "uuid": "c194376b-84a3-49c5-9b40-d88cf71185fd", 00:14:18.088 "strip_size_kb": 64, 00:14:18.088 "state": "online", 00:14:18.088 "raid_level": "raid5f", 00:14:18.088 "superblock": true, 00:14:18.088 "num_base_bdevs": 3, 00:14:18.088 "num_base_bdevs_discovered": 3, 00:14:18.088 "num_base_bdevs_operational": 3, 00:14:18.088 "process": { 00:14:18.088 "type": "rebuild", 00:14:18.088 "target": "spare", 00:14:18.088 "progress": { 00:14:18.088 "blocks": 22528, 00:14:18.088 "percent": 17 00:14:18.088 } 00:14:18.088 }, 00:14:18.088 "base_bdevs_list": [ 00:14:18.088 { 00:14:18.088 "name": "spare", 00:14:18.088 "uuid": "b192dc1c-850e-5e49-bd2a-97437989ac4a", 00:14:18.088 "is_configured": true, 00:14:18.088 "data_offset": 2048, 00:14:18.088 
"data_size": 63488 00:14:18.088 }, 00:14:18.088 { 00:14:18.088 "name": "BaseBdev2", 00:14:18.088 "uuid": "306d8428-1814-539b-9b17-12d33b62308d", 00:14:18.088 "is_configured": true, 00:14:18.088 "data_offset": 2048, 00:14:18.088 "data_size": 63488 00:14:18.088 }, 00:14:18.088 { 00:14:18.088 "name": "BaseBdev3", 00:14:18.088 "uuid": "0dc2ddeb-f69d-5f85-afd2-ea675e1b8a86", 00:14:18.088 "is_configured": true, 00:14:18.088 "data_offset": 2048, 00:14:18.088 "data_size": 63488 00:14:18.088 } 00:14:18.088 ] 00:14:18.088 }' 00:14:18.088 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.088 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.088 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.088 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.088 01:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:19.025 01:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:19.025 01:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.025 01:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.025 01:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.025 01:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.025 01:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.025 01:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.025 01:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:19.025 01:34:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.025 01:34:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.285 01:34:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.285 01:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.285 "name": "raid_bdev1", 00:14:19.285 "uuid": "c194376b-84a3-49c5-9b40-d88cf71185fd", 00:14:19.285 "strip_size_kb": 64, 00:14:19.285 "state": "online", 00:14:19.285 "raid_level": "raid5f", 00:14:19.285 "superblock": true, 00:14:19.285 "num_base_bdevs": 3, 00:14:19.285 "num_base_bdevs_discovered": 3, 00:14:19.285 "num_base_bdevs_operational": 3, 00:14:19.285 "process": { 00:14:19.285 "type": "rebuild", 00:14:19.285 "target": "spare", 00:14:19.285 "progress": { 00:14:19.285 "blocks": 45056, 00:14:19.285 "percent": 35 00:14:19.285 } 00:14:19.285 }, 00:14:19.285 "base_bdevs_list": [ 00:14:19.285 { 00:14:19.285 "name": "spare", 00:14:19.285 "uuid": "b192dc1c-850e-5e49-bd2a-97437989ac4a", 00:14:19.285 "is_configured": true, 00:14:19.285 "data_offset": 2048, 00:14:19.285 "data_size": 63488 00:14:19.285 }, 00:14:19.285 { 00:14:19.285 "name": "BaseBdev2", 00:14:19.285 "uuid": "306d8428-1814-539b-9b17-12d33b62308d", 00:14:19.285 "is_configured": true, 00:14:19.285 "data_offset": 2048, 00:14:19.285 "data_size": 63488 00:14:19.285 }, 00:14:19.285 { 00:14:19.285 "name": "BaseBdev3", 00:14:19.285 "uuid": "0dc2ddeb-f69d-5f85-afd2-ea675e1b8a86", 00:14:19.285 "is_configured": true, 00:14:19.285 "data_offset": 2048, 00:14:19.285 "data_size": 63488 00:14:19.285 } 00:14:19.285 ] 00:14:19.285 }' 00:14:19.285 01:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.285 01:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.285 01:34:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.285 01:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.285 01:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:20.223 01:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:20.223 01:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.223 01:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.223 01:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.223 01:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.223 01:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.223 01:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.223 01:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.223 01:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.223 01:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.223 01:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.496 01:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.496 "name": "raid_bdev1", 00:14:20.496 "uuid": "c194376b-84a3-49c5-9b40-d88cf71185fd", 00:14:20.496 "strip_size_kb": 64, 00:14:20.496 "state": "online", 00:14:20.496 "raid_level": "raid5f", 00:14:20.496 "superblock": true, 00:14:20.496 "num_base_bdevs": 3, 00:14:20.496 "num_base_bdevs_discovered": 3, 00:14:20.496 "num_base_bdevs_operational": 
3, 00:14:20.496 "process": { 00:14:20.496 "type": "rebuild", 00:14:20.496 "target": "spare", 00:14:20.496 "progress": { 00:14:20.496 "blocks": 69632, 00:14:20.496 "percent": 54 00:14:20.496 } 00:14:20.496 }, 00:14:20.496 "base_bdevs_list": [ 00:14:20.496 { 00:14:20.496 "name": "spare", 00:14:20.496 "uuid": "b192dc1c-850e-5e49-bd2a-97437989ac4a", 00:14:20.496 "is_configured": true, 00:14:20.496 "data_offset": 2048, 00:14:20.496 "data_size": 63488 00:14:20.496 }, 00:14:20.496 { 00:14:20.496 "name": "BaseBdev2", 00:14:20.496 "uuid": "306d8428-1814-539b-9b17-12d33b62308d", 00:14:20.496 "is_configured": true, 00:14:20.496 "data_offset": 2048, 00:14:20.496 "data_size": 63488 00:14:20.496 }, 00:14:20.496 { 00:14:20.496 "name": "BaseBdev3", 00:14:20.496 "uuid": "0dc2ddeb-f69d-5f85-afd2-ea675e1b8a86", 00:14:20.496 "is_configured": true, 00:14:20.496 "data_offset": 2048, 00:14:20.496 "data_size": 63488 00:14:20.496 } 00:14:20.496 ] 00:14:20.496 }' 00:14:20.496 01:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.496 01:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.496 01:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.496 01:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.496 01:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:21.497 01:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:21.497 01:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.497 01:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.497 01:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.497 
01:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.497 01:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.497 01:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.497 01:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.497 01:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.497 01:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.497 01:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.497 01:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.497 "name": "raid_bdev1", 00:14:21.497 "uuid": "c194376b-84a3-49c5-9b40-d88cf71185fd", 00:14:21.497 "strip_size_kb": 64, 00:14:21.497 "state": "online", 00:14:21.497 "raid_level": "raid5f", 00:14:21.497 "superblock": true, 00:14:21.497 "num_base_bdevs": 3, 00:14:21.497 "num_base_bdevs_discovered": 3, 00:14:21.497 "num_base_bdevs_operational": 3, 00:14:21.497 "process": { 00:14:21.497 "type": "rebuild", 00:14:21.497 "target": "spare", 00:14:21.497 "progress": { 00:14:21.497 "blocks": 92160, 00:14:21.497 "percent": 72 00:14:21.497 } 00:14:21.497 }, 00:14:21.497 "base_bdevs_list": [ 00:14:21.497 { 00:14:21.497 "name": "spare", 00:14:21.497 "uuid": "b192dc1c-850e-5e49-bd2a-97437989ac4a", 00:14:21.497 "is_configured": true, 00:14:21.497 "data_offset": 2048, 00:14:21.497 "data_size": 63488 00:14:21.497 }, 00:14:21.497 { 00:14:21.497 "name": "BaseBdev2", 00:14:21.497 "uuid": "306d8428-1814-539b-9b17-12d33b62308d", 00:14:21.497 "is_configured": true, 00:14:21.497 "data_offset": 2048, 00:14:21.497 "data_size": 63488 00:14:21.497 }, 00:14:21.497 { 00:14:21.497 "name": "BaseBdev3", 00:14:21.497 "uuid": 
"0dc2ddeb-f69d-5f85-afd2-ea675e1b8a86", 00:14:21.497 "is_configured": true, 00:14:21.497 "data_offset": 2048, 00:14:21.497 "data_size": 63488 00:14:21.497 } 00:14:21.497 ] 00:14:21.497 }' 00:14:21.497 01:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.497 01:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:21.497 01:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.497 01:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.497 01:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:22.877 01:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:22.877 01:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.877 01:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.878 01:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.878 01:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.878 01:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.878 01:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.878 01:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.878 01:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.878 01:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.878 01:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.878 
01:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.878 "name": "raid_bdev1", 00:14:22.878 "uuid": "c194376b-84a3-49c5-9b40-d88cf71185fd", 00:14:22.878 "strip_size_kb": 64, 00:14:22.878 "state": "online", 00:14:22.878 "raid_level": "raid5f", 00:14:22.878 "superblock": true, 00:14:22.878 "num_base_bdevs": 3, 00:14:22.878 "num_base_bdevs_discovered": 3, 00:14:22.878 "num_base_bdevs_operational": 3, 00:14:22.878 "process": { 00:14:22.878 "type": "rebuild", 00:14:22.878 "target": "spare", 00:14:22.878 "progress": { 00:14:22.878 "blocks": 114688, 00:14:22.878 "percent": 90 00:14:22.878 } 00:14:22.878 }, 00:14:22.878 "base_bdevs_list": [ 00:14:22.878 { 00:14:22.878 "name": "spare", 00:14:22.878 "uuid": "b192dc1c-850e-5e49-bd2a-97437989ac4a", 00:14:22.878 "is_configured": true, 00:14:22.878 "data_offset": 2048, 00:14:22.878 "data_size": 63488 00:14:22.878 }, 00:14:22.878 { 00:14:22.878 "name": "BaseBdev2", 00:14:22.878 "uuid": "306d8428-1814-539b-9b17-12d33b62308d", 00:14:22.878 "is_configured": true, 00:14:22.878 "data_offset": 2048, 00:14:22.878 "data_size": 63488 00:14:22.878 }, 00:14:22.878 { 00:14:22.878 "name": "BaseBdev3", 00:14:22.878 "uuid": "0dc2ddeb-f69d-5f85-afd2-ea675e1b8a86", 00:14:22.878 "is_configured": true, 00:14:22.878 "data_offset": 2048, 00:14:22.878 "data_size": 63488 00:14:22.878 } 00:14:22.878 ] 00:14:22.878 }' 00:14:22.878 01:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.878 01:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.878 01:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.878 01:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.878 01:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:23.137 [2024-10-09 01:34:21.870136] 
bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:23.138 [2024-10-09 01:34:21.870209] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:23.138 [2024-10-09 01:34:21.870309] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.707 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:23.707 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.707 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.707 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.707 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.707 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.707 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.707 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.707 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.707 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.707 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.707 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.707 "name": "raid_bdev1", 00:14:23.707 "uuid": "c194376b-84a3-49c5-9b40-d88cf71185fd", 00:14:23.707 "strip_size_kb": 64, 00:14:23.707 "state": "online", 00:14:23.707 "raid_level": "raid5f", 00:14:23.707 "superblock": true, 00:14:23.707 "num_base_bdevs": 3, 00:14:23.707 "num_base_bdevs_discovered": 3, 
00:14:23.707 "num_base_bdevs_operational": 3, 00:14:23.707 "base_bdevs_list": [ 00:14:23.707 { 00:14:23.707 "name": "spare", 00:14:23.707 "uuid": "b192dc1c-850e-5e49-bd2a-97437989ac4a", 00:14:23.707 "is_configured": true, 00:14:23.707 "data_offset": 2048, 00:14:23.707 "data_size": 63488 00:14:23.707 }, 00:14:23.707 { 00:14:23.707 "name": "BaseBdev2", 00:14:23.707 "uuid": "306d8428-1814-539b-9b17-12d33b62308d", 00:14:23.707 "is_configured": true, 00:14:23.707 "data_offset": 2048, 00:14:23.707 "data_size": 63488 00:14:23.707 }, 00:14:23.707 { 00:14:23.707 "name": "BaseBdev3", 00:14:23.707 "uuid": "0dc2ddeb-f69d-5f85-afd2-ea675e1b8a86", 00:14:23.707 "is_configured": true, 00:14:23.707 "data_offset": 2048, 00:14:23.707 "data_size": 63488 00:14:23.707 } 00:14:23.707 ] 00:14:23.707 }' 00:14:23.707 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.707 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:23.707 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.967 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:23.967 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:23.967 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:23.967 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.967 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:23.967 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:23.967 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.967 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:23.967 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.967 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.967 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.967 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.967 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.967 "name": "raid_bdev1", 00:14:23.967 "uuid": "c194376b-84a3-49c5-9b40-d88cf71185fd", 00:14:23.967 "strip_size_kb": 64, 00:14:23.967 "state": "online", 00:14:23.967 "raid_level": "raid5f", 00:14:23.967 "superblock": true, 00:14:23.967 "num_base_bdevs": 3, 00:14:23.967 "num_base_bdevs_discovered": 3, 00:14:23.967 "num_base_bdevs_operational": 3, 00:14:23.967 "base_bdevs_list": [ 00:14:23.967 { 00:14:23.967 "name": "spare", 00:14:23.967 "uuid": "b192dc1c-850e-5e49-bd2a-97437989ac4a", 00:14:23.967 "is_configured": true, 00:14:23.967 "data_offset": 2048, 00:14:23.967 "data_size": 63488 00:14:23.967 }, 00:14:23.967 { 00:14:23.967 "name": "BaseBdev2", 00:14:23.967 "uuid": "306d8428-1814-539b-9b17-12d33b62308d", 00:14:23.967 "is_configured": true, 00:14:23.967 "data_offset": 2048, 00:14:23.967 "data_size": 63488 00:14:23.967 }, 00:14:23.968 { 00:14:23.968 "name": "BaseBdev3", 00:14:23.968 "uuid": "0dc2ddeb-f69d-5f85-afd2-ea675e1b8a86", 00:14:23.968 "is_configured": true, 00:14:23.968 "data_offset": 2048, 00:14:23.968 "data_size": 63488 00:14:23.968 } 00:14:23.968 ] 00:14:23.968 }' 00:14:23.968 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.968 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:23.968 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:23.968 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:23.968 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:23.968 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.968 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.968 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.968 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.968 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.968 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.968 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.968 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.968 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.968 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.968 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.968 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.968 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.968 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.968 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.968 "name": "raid_bdev1", 00:14:23.968 "uuid": 
"c194376b-84a3-49c5-9b40-d88cf71185fd", 00:14:23.968 "strip_size_kb": 64, 00:14:23.968 "state": "online", 00:14:23.968 "raid_level": "raid5f", 00:14:23.968 "superblock": true, 00:14:23.968 "num_base_bdevs": 3, 00:14:23.968 "num_base_bdevs_discovered": 3, 00:14:23.968 "num_base_bdevs_operational": 3, 00:14:23.968 "base_bdevs_list": [ 00:14:23.968 { 00:14:23.968 "name": "spare", 00:14:23.968 "uuid": "b192dc1c-850e-5e49-bd2a-97437989ac4a", 00:14:23.968 "is_configured": true, 00:14:23.968 "data_offset": 2048, 00:14:23.968 "data_size": 63488 00:14:23.968 }, 00:14:23.968 { 00:14:23.968 "name": "BaseBdev2", 00:14:23.968 "uuid": "306d8428-1814-539b-9b17-12d33b62308d", 00:14:23.968 "is_configured": true, 00:14:23.968 "data_offset": 2048, 00:14:23.968 "data_size": 63488 00:14:23.968 }, 00:14:23.968 { 00:14:23.968 "name": "BaseBdev3", 00:14:23.968 "uuid": "0dc2ddeb-f69d-5f85-afd2-ea675e1b8a86", 00:14:23.968 "is_configured": true, 00:14:23.968 "data_offset": 2048, 00:14:23.968 "data_size": 63488 00:14:23.968 } 00:14:23.968 ] 00:14:23.968 }' 00:14:23.968 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.968 01:34:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.537 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:24.537 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.537 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.537 [2024-10-09 01:34:23.203179] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:24.537 [2024-10-09 01:34:23.203209] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:24.537 [2024-10-09 01:34:23.203290] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:24.537 [2024-10-09 01:34:23.203368] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:24.537 [2024-10-09 01:34:23.203384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:24.537 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.537 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.537 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:24.537 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.537 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.537 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.537 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:24.537 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:24.537 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:24.537 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:24.537 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:24.537 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:24.537 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:24.537 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:24.537 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:24.537 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # 
local i 00:14:24.537 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:24.537 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:24.537 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:24.797 /dev/nbd0 00:14:24.797 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:24.797 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:24.797 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:24.797 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:24.797 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:24.797 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:24.797 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:24.797 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:24.797 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:24.797 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:24.797 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:24.797 1+0 records in 00:14:24.797 1+0 records out 00:14:24.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352085 s, 11.6 MB/s 00:14:24.797 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.797 01:34:23 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:24.797 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.797 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:24.797 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:24.797 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:24.797 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:24.797 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:25.057 /dev/nbd1 00:14:25.057 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:25.057 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:25.057 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:25.057 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:25.057 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:25.057 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:25.057 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:25.057 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:25.057 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:25.057 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:25.057 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # 
dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:25.057 1+0 records in 00:14:25.057 1+0 records out 00:14:25.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313109 s, 13.1 MB/s 00:14:25.057 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.057 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:25.057 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.057 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:25.057 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:25.057 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:25.057 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:25.057 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:25.057 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:25.057 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:25.057 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:25.057 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:25.057 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:25.057 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.057 01:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:14:25.317 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:25.317 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:25.317 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:25.317 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.317 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.317 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:25.317 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:25.317 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.317 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.317 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:25.577 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:25.577 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:25.577 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:25.577 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.577 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.577 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:25.577 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:25.577 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.577 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:25.577 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:25.577 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.577 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.577 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.577 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:25.577 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.577 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.577 [2024-10-09 01:34:24.302915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:25.577 [2024-10-09 01:34:24.302981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.577 [2024-10-09 01:34:24.303004] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:25.577 [2024-10-09 01:34:24.303016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.577 [2024-10-09 01:34:24.305715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.577 [2024-10-09 01:34:24.305752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:25.578 [2024-10-09 01:34:24.305829] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:25.578 [2024-10-09 01:34:24.305877] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:25.578 [2024-10-09 01:34:24.306014] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:25.578 [2024-10-09 01:34:24.306110] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:25.578 spare 00:14:25.578 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.578 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:25.578 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.578 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.578 [2024-10-09 01:34:24.406179] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:25.578 [2024-10-09 01:34:24.406214] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:25.578 [2024-10-09 01:34:24.406480] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:14:25.578 [2024-10-09 01:34:24.406929] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:25.578 [2024-10-09 01:34:24.406947] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:25.578 [2024-10-09 01:34:24.407107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.578 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.578 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:25.578 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.578 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.578 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.578 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:14:25.578 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.578 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.578 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.578 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.578 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.578 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.578 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.578 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.578 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.578 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.578 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.578 "name": "raid_bdev1", 00:14:25.578 "uuid": "c194376b-84a3-49c5-9b40-d88cf71185fd", 00:14:25.578 "strip_size_kb": 64, 00:14:25.578 "state": "online", 00:14:25.578 "raid_level": "raid5f", 00:14:25.578 "superblock": true, 00:14:25.578 "num_base_bdevs": 3, 00:14:25.578 "num_base_bdevs_discovered": 3, 00:14:25.578 "num_base_bdevs_operational": 3, 00:14:25.578 "base_bdevs_list": [ 00:14:25.578 { 00:14:25.578 "name": "spare", 00:14:25.578 "uuid": "b192dc1c-850e-5e49-bd2a-97437989ac4a", 00:14:25.578 "is_configured": true, 00:14:25.578 "data_offset": 2048, 00:14:25.578 "data_size": 63488 00:14:25.578 }, 00:14:25.578 { 00:14:25.578 "name": "BaseBdev2", 00:14:25.578 "uuid": "306d8428-1814-539b-9b17-12d33b62308d", 00:14:25.578 "is_configured": true, 00:14:25.578 "data_offset": 
2048, 00:14:25.578 "data_size": 63488 00:14:25.578 }, 00:14:25.578 { 00:14:25.578 "name": "BaseBdev3", 00:14:25.578 "uuid": "0dc2ddeb-f69d-5f85-afd2-ea675e1b8a86", 00:14:25.578 "is_configured": true, 00:14:25.578 "data_offset": 2048, 00:14:25.578 "data_size": 63488 00:14:25.578 } 00:14:25.578 ] 00:14:25.578 }' 00:14:25.578 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.578 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.147 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:26.147 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.148 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:26.148 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:26.148 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.148 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.148 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.148 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.148 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.148 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.148 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.148 "name": "raid_bdev1", 00:14:26.148 "uuid": "c194376b-84a3-49c5-9b40-d88cf71185fd", 00:14:26.148 "strip_size_kb": 64, 00:14:26.148 "state": "online", 00:14:26.148 "raid_level": "raid5f", 00:14:26.148 "superblock": true, 00:14:26.148 
"num_base_bdevs": 3, 00:14:26.148 "num_base_bdevs_discovered": 3, 00:14:26.148 "num_base_bdevs_operational": 3, 00:14:26.148 "base_bdevs_list": [ 00:14:26.148 { 00:14:26.148 "name": "spare", 00:14:26.148 "uuid": "b192dc1c-850e-5e49-bd2a-97437989ac4a", 00:14:26.148 "is_configured": true, 00:14:26.148 "data_offset": 2048, 00:14:26.148 "data_size": 63488 00:14:26.148 }, 00:14:26.148 { 00:14:26.148 "name": "BaseBdev2", 00:14:26.148 "uuid": "306d8428-1814-539b-9b17-12d33b62308d", 00:14:26.148 "is_configured": true, 00:14:26.148 "data_offset": 2048, 00:14:26.148 "data_size": 63488 00:14:26.148 }, 00:14:26.148 { 00:14:26.148 "name": "BaseBdev3", 00:14:26.148 "uuid": "0dc2ddeb-f69d-5f85-afd2-ea675e1b8a86", 00:14:26.148 "is_configured": true, 00:14:26.148 "data_offset": 2048, 00:14:26.148 "data_size": 63488 00:14:26.148 } 00:14:26.148 ] 00:14:26.148 }' 00:14:26.148 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.148 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:26.148 01:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.148 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:26.148 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.148 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:26.148 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.148 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.148 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.408 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.408 01:34:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:26.408 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.408 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.408 [2024-10-09 01:34:25.063260] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.408 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.408 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:26.408 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.408 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.408 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.408 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.408 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:26.408 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.408 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.408 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.408 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.408 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.408 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.408 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:26.408 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.408 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.408 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.408 "name": "raid_bdev1", 00:14:26.408 "uuid": "c194376b-84a3-49c5-9b40-d88cf71185fd", 00:14:26.408 "strip_size_kb": 64, 00:14:26.408 "state": "online", 00:14:26.408 "raid_level": "raid5f", 00:14:26.408 "superblock": true, 00:14:26.408 "num_base_bdevs": 3, 00:14:26.408 "num_base_bdevs_discovered": 2, 00:14:26.408 "num_base_bdevs_operational": 2, 00:14:26.408 "base_bdevs_list": [ 00:14:26.408 { 00:14:26.408 "name": null, 00:14:26.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.408 "is_configured": false, 00:14:26.408 "data_offset": 0, 00:14:26.408 "data_size": 63488 00:14:26.408 }, 00:14:26.408 { 00:14:26.408 "name": "BaseBdev2", 00:14:26.408 "uuid": "306d8428-1814-539b-9b17-12d33b62308d", 00:14:26.408 "is_configured": true, 00:14:26.408 "data_offset": 2048, 00:14:26.408 "data_size": 63488 00:14:26.408 }, 00:14:26.408 { 00:14:26.408 "name": "BaseBdev3", 00:14:26.408 "uuid": "0dc2ddeb-f69d-5f85-afd2-ea675e1b8a86", 00:14:26.408 "is_configured": true, 00:14:26.408 "data_offset": 2048, 00:14:26.408 "data_size": 63488 00:14:26.408 } 00:14:26.408 ] 00:14:26.408 }' 00:14:26.408 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.408 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.668 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:26.668 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.668 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.668 [2024-10-09 01:34:25.519389] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.668 [2024-10-09 01:34:25.519543] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:26.668 [2024-10-09 01:34:25.519564] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:26.668 [2024-10-09 01:34:25.519609] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.668 [2024-10-09 01:34:25.525971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:14:26.668 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.668 01:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:26.668 [2024-10-09 01:34:25.528381] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:28.048 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.049 "name": "raid_bdev1", 00:14:28.049 "uuid": "c194376b-84a3-49c5-9b40-d88cf71185fd", 00:14:28.049 "strip_size_kb": 64, 00:14:28.049 "state": "online", 00:14:28.049 "raid_level": "raid5f", 00:14:28.049 "superblock": true, 00:14:28.049 "num_base_bdevs": 3, 00:14:28.049 "num_base_bdevs_discovered": 3, 00:14:28.049 "num_base_bdevs_operational": 3, 00:14:28.049 "process": { 00:14:28.049 "type": "rebuild", 00:14:28.049 "target": "spare", 00:14:28.049 "progress": { 00:14:28.049 "blocks": 20480, 00:14:28.049 "percent": 16 00:14:28.049 } 00:14:28.049 }, 00:14:28.049 "base_bdevs_list": [ 00:14:28.049 { 00:14:28.049 "name": "spare", 00:14:28.049 "uuid": "b192dc1c-850e-5e49-bd2a-97437989ac4a", 00:14:28.049 "is_configured": true, 00:14:28.049 "data_offset": 2048, 00:14:28.049 "data_size": 63488 00:14:28.049 }, 00:14:28.049 { 00:14:28.049 "name": "BaseBdev2", 00:14:28.049 "uuid": "306d8428-1814-539b-9b17-12d33b62308d", 00:14:28.049 "is_configured": true, 00:14:28.049 "data_offset": 2048, 00:14:28.049 "data_size": 63488 00:14:28.049 }, 00:14:28.049 { 00:14:28.049 "name": "BaseBdev3", 00:14:28.049 "uuid": "0dc2ddeb-f69d-5f85-afd2-ea675e1b8a86", 00:14:28.049 "is_configured": true, 00:14:28.049 "data_offset": 2048, 00:14:28.049 "data_size": 63488 00:14:28.049 } 00:14:28.049 ] 00:14:28.049 }' 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.049 [2024-10-09 01:34:26.666437] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:28.049 [2024-10-09 01:34:26.738613] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:28.049 [2024-10-09 01:34:26.739036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.049 [2024-10-09 01:34:26.739065] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:28.049 [2024-10-09 01:34:26.739080] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.049 01:34:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.049 "name": "raid_bdev1", 00:14:28.049 "uuid": "c194376b-84a3-49c5-9b40-d88cf71185fd", 00:14:28.049 "strip_size_kb": 64, 00:14:28.049 "state": "online", 00:14:28.049 "raid_level": "raid5f", 00:14:28.049 "superblock": true, 00:14:28.049 "num_base_bdevs": 3, 00:14:28.049 "num_base_bdevs_discovered": 2, 00:14:28.049 "num_base_bdevs_operational": 2, 00:14:28.049 "base_bdevs_list": [ 00:14:28.049 { 00:14:28.049 "name": null, 00:14:28.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.049 "is_configured": false, 00:14:28.049 "data_offset": 0, 00:14:28.049 "data_size": 63488 00:14:28.049 }, 00:14:28.049 { 00:14:28.049 "name": "BaseBdev2", 00:14:28.049 "uuid": "306d8428-1814-539b-9b17-12d33b62308d", 00:14:28.049 "is_configured": true, 00:14:28.049 "data_offset": 2048, 00:14:28.049 "data_size": 63488 00:14:28.049 }, 00:14:28.049 { 00:14:28.049 "name": "BaseBdev3", 00:14:28.049 "uuid": "0dc2ddeb-f69d-5f85-afd2-ea675e1b8a86", 00:14:28.049 "is_configured": true, 00:14:28.049 "data_offset": 2048, 00:14:28.049 "data_size": 63488 00:14:28.049 } 00:14:28.049 ] 00:14:28.049 }' 00:14:28.049 01:34:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.049 01:34:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.309 01:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:28.309 01:34:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.309 01:34:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.309 [2024-10-09 01:34:27.155656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:28.309 [2024-10-09 01:34:27.156004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.309 [2024-10-09 01:34:27.156080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:14:28.309 [2024-10-09 01:34:27.156145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.309 [2024-10-09 01:34:27.156703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.309 [2024-10-09 01:34:27.156810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:28.309 [2024-10-09 01:34:27.156941] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:28.309 [2024-10-09 01:34:27.156966] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:28.309 [2024-10-09 01:34:27.156989] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:28.309 [2024-10-09 01:34:27.157073] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:28.309 [2024-10-09 01:34:27.161836] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047970 00:14:28.309 spare 00:14:28.309 01:34:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.309 01:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:28.309 [2024-10-09 01:34:27.164260] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:29.690 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.690 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.690 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.690 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.690 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.690 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.690 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.690 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.690 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.690 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.690 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.690 "name": "raid_bdev1", 00:14:29.690 "uuid": "c194376b-84a3-49c5-9b40-d88cf71185fd", 00:14:29.690 "strip_size_kb": 64, 00:14:29.690 "state": 
"online", 00:14:29.690 "raid_level": "raid5f", 00:14:29.690 "superblock": true, 00:14:29.690 "num_base_bdevs": 3, 00:14:29.690 "num_base_bdevs_discovered": 3, 00:14:29.690 "num_base_bdevs_operational": 3, 00:14:29.690 "process": { 00:14:29.690 "type": "rebuild", 00:14:29.690 "target": "spare", 00:14:29.690 "progress": { 00:14:29.690 "blocks": 20480, 00:14:29.690 "percent": 16 00:14:29.690 } 00:14:29.690 }, 00:14:29.690 "base_bdevs_list": [ 00:14:29.690 { 00:14:29.690 "name": "spare", 00:14:29.691 "uuid": "b192dc1c-850e-5e49-bd2a-97437989ac4a", 00:14:29.691 "is_configured": true, 00:14:29.691 "data_offset": 2048, 00:14:29.691 "data_size": 63488 00:14:29.691 }, 00:14:29.691 { 00:14:29.691 "name": "BaseBdev2", 00:14:29.691 "uuid": "306d8428-1814-539b-9b17-12d33b62308d", 00:14:29.691 "is_configured": true, 00:14:29.691 "data_offset": 2048, 00:14:29.691 "data_size": 63488 00:14:29.691 }, 00:14:29.691 { 00:14:29.691 "name": "BaseBdev3", 00:14:29.691 "uuid": "0dc2ddeb-f69d-5f85-afd2-ea675e1b8a86", 00:14:29.691 "is_configured": true, 00:14:29.691 "data_offset": 2048, 00:14:29.691 "data_size": 63488 00:14:29.691 } 00:14:29.691 ] 00:14:29.691 }' 00:14:29.691 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.691 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.691 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.691 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.691 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:29.691 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.691 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.691 [2024-10-09 01:34:28.302186] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.691 [2024-10-09 01:34:28.374384] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:29.691 [2024-10-09 01:34:28.374430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.691 [2024-10-09 01:34:28.374451] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.691 [2024-10-09 01:34:28.374458] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:29.691 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.691 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:29.691 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.691 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.691 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.691 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.691 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:29.691 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.691 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.691 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.691 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.691 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.691 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.691 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.691 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.691 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.691 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.691 "name": "raid_bdev1", 00:14:29.691 "uuid": "c194376b-84a3-49c5-9b40-d88cf71185fd", 00:14:29.691 "strip_size_kb": 64, 00:14:29.691 "state": "online", 00:14:29.691 "raid_level": "raid5f", 00:14:29.691 "superblock": true, 00:14:29.691 "num_base_bdevs": 3, 00:14:29.691 "num_base_bdevs_discovered": 2, 00:14:29.691 "num_base_bdevs_operational": 2, 00:14:29.691 "base_bdevs_list": [ 00:14:29.691 { 00:14:29.691 "name": null, 00:14:29.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.691 "is_configured": false, 00:14:29.691 "data_offset": 0, 00:14:29.691 "data_size": 63488 00:14:29.691 }, 00:14:29.691 { 00:14:29.691 "name": "BaseBdev2", 00:14:29.691 "uuid": "306d8428-1814-539b-9b17-12d33b62308d", 00:14:29.691 "is_configured": true, 00:14:29.691 "data_offset": 2048, 00:14:29.691 "data_size": 63488 00:14:29.691 }, 00:14:29.691 { 00:14:29.691 "name": "BaseBdev3", 00:14:29.691 "uuid": "0dc2ddeb-f69d-5f85-afd2-ea675e1b8a86", 00:14:29.691 "is_configured": true, 00:14:29.691 "data_offset": 2048, 00:14:29.691 "data_size": 63488 00:14:29.691 } 00:14:29.691 ] 00:14:29.691 }' 00:14:29.691 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.691 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.950 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:29.950 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:14:29.950 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:29.950 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:29.950 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.950 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.950 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.950 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.950 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.210 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.210 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.210 "name": "raid_bdev1", 00:14:30.210 "uuid": "c194376b-84a3-49c5-9b40-d88cf71185fd", 00:14:30.210 "strip_size_kb": 64, 00:14:30.210 "state": "online", 00:14:30.210 "raid_level": "raid5f", 00:14:30.210 "superblock": true, 00:14:30.210 "num_base_bdevs": 3, 00:14:30.210 "num_base_bdevs_discovered": 2, 00:14:30.210 "num_base_bdevs_operational": 2, 00:14:30.210 "base_bdevs_list": [ 00:14:30.210 { 00:14:30.210 "name": null, 00:14:30.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.210 "is_configured": false, 00:14:30.210 "data_offset": 0, 00:14:30.210 "data_size": 63488 00:14:30.210 }, 00:14:30.210 { 00:14:30.210 "name": "BaseBdev2", 00:14:30.210 "uuid": "306d8428-1814-539b-9b17-12d33b62308d", 00:14:30.210 "is_configured": true, 00:14:30.210 "data_offset": 2048, 00:14:30.210 "data_size": 63488 00:14:30.210 }, 00:14:30.210 { 00:14:30.210 "name": "BaseBdev3", 00:14:30.210 "uuid": "0dc2ddeb-f69d-5f85-afd2-ea675e1b8a86", 00:14:30.210 "is_configured": true, 
00:14:30.210 "data_offset": 2048, 00:14:30.210 "data_size": 63488 00:14:30.210 } 00:14:30.210 ] 00:14:30.210 }' 00:14:30.210 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.210 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:30.210 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.210 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:30.210 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:30.210 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.210 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.210 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.210 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:30.210 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.210 01:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.210 [2024-10-09 01:34:28.998834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:30.210 [2024-10-09 01:34:28.998881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.210 [2024-10-09 01:34:28.998908] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:30.210 [2024-10-09 01:34:28.998917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.210 [2024-10-09 01:34:28.999368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.210 [2024-10-09 
01:34:28.999392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:30.210 [2024-10-09 01:34:28.999466] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:30.210 [2024-10-09 01:34:28.999479] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:30.210 [2024-10-09 01:34:28.999490] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:30.210 [2024-10-09 01:34:28.999501] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:30.210 BaseBdev1 00:14:30.210 01:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.210 01:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:31.149 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:31.149 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.149 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.149 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.149 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.149 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:31.149 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.149 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.149 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.149 01:34:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.149 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.149 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.149 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.149 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.149 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.409 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.409 "name": "raid_bdev1", 00:14:31.409 "uuid": "c194376b-84a3-49c5-9b40-d88cf71185fd", 00:14:31.409 "strip_size_kb": 64, 00:14:31.409 "state": "online", 00:14:31.409 "raid_level": "raid5f", 00:14:31.409 "superblock": true, 00:14:31.409 "num_base_bdevs": 3, 00:14:31.409 "num_base_bdevs_discovered": 2, 00:14:31.409 "num_base_bdevs_operational": 2, 00:14:31.409 "base_bdevs_list": [ 00:14:31.409 { 00:14:31.409 "name": null, 00:14:31.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.409 "is_configured": false, 00:14:31.409 "data_offset": 0, 00:14:31.409 "data_size": 63488 00:14:31.409 }, 00:14:31.409 { 00:14:31.409 "name": "BaseBdev2", 00:14:31.409 "uuid": "306d8428-1814-539b-9b17-12d33b62308d", 00:14:31.409 "is_configured": true, 00:14:31.409 "data_offset": 2048, 00:14:31.409 "data_size": 63488 00:14:31.409 }, 00:14:31.409 { 00:14:31.409 "name": "BaseBdev3", 00:14:31.409 "uuid": "0dc2ddeb-f69d-5f85-afd2-ea675e1b8a86", 00:14:31.409 "is_configured": true, 00:14:31.409 "data_offset": 2048, 00:14:31.409 "data_size": 63488 00:14:31.409 } 00:14:31.409 ] 00:14:31.409 }' 00:14:31.409 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.409 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:31.669 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:31.669 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.669 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:31.669 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:31.669 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.669 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.669 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.669 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.669 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.669 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.669 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.669 "name": "raid_bdev1", 00:14:31.669 "uuid": "c194376b-84a3-49c5-9b40-d88cf71185fd", 00:14:31.669 "strip_size_kb": 64, 00:14:31.669 "state": "online", 00:14:31.669 "raid_level": "raid5f", 00:14:31.669 "superblock": true, 00:14:31.669 "num_base_bdevs": 3, 00:14:31.669 "num_base_bdevs_discovered": 2, 00:14:31.669 "num_base_bdevs_operational": 2, 00:14:31.669 "base_bdevs_list": [ 00:14:31.669 { 00:14:31.669 "name": null, 00:14:31.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.669 "is_configured": false, 00:14:31.669 "data_offset": 0, 00:14:31.669 "data_size": 63488 00:14:31.669 }, 00:14:31.669 { 00:14:31.669 "name": "BaseBdev2", 00:14:31.669 "uuid": "306d8428-1814-539b-9b17-12d33b62308d", 
00:14:31.669 "is_configured": true, 00:14:31.669 "data_offset": 2048, 00:14:31.669 "data_size": 63488 00:14:31.669 }, 00:14:31.669 { 00:14:31.669 "name": "BaseBdev3", 00:14:31.669 "uuid": "0dc2ddeb-f69d-5f85-afd2-ea675e1b8a86", 00:14:31.669 "is_configured": true, 00:14:31.669 "data_offset": 2048, 00:14:31.669 "data_size": 63488 00:14:31.669 } 00:14:31.669 ] 00:14:31.669 }' 00:14:31.669 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.669 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:31.669 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.928 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:31.928 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:31.928 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:14:31.928 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:31.928 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:31.928 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:31.928 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:31.928 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:31.928 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:31.928 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.928 01:34:30 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.928 [2024-10-09 01:34:30.591282] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:31.928 [2024-10-09 01:34:30.591442] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:31.928 [2024-10-09 01:34:30.591471] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:31.928 request: 00:14:31.928 { 00:14:31.928 "base_bdev": "BaseBdev1", 00:14:31.928 "raid_bdev": "raid_bdev1", 00:14:31.928 "method": "bdev_raid_add_base_bdev", 00:14:31.928 "req_id": 1 00:14:31.928 } 00:14:31.928 Got JSON-RPC error response 00:14:31.928 response: 00:14:31.928 { 00:14:31.928 "code": -22, 00:14:31.928 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:31.928 } 00:14:31.928 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:31.928 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:14:31.928 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:31.928 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:31.928 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:31.928 01:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:32.866 01:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:32.866 01:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.866 01:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.866 01:34:31 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.866 01:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.866 01:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:32.866 01:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.866 01:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.866 01:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.866 01:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.866 01:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.866 01:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.866 01:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.866 01:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.866 01:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.866 01:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.866 "name": "raid_bdev1", 00:14:32.866 "uuid": "c194376b-84a3-49c5-9b40-d88cf71185fd", 00:14:32.866 "strip_size_kb": 64, 00:14:32.866 "state": "online", 00:14:32.866 "raid_level": "raid5f", 00:14:32.866 "superblock": true, 00:14:32.866 "num_base_bdevs": 3, 00:14:32.866 "num_base_bdevs_discovered": 2, 00:14:32.866 "num_base_bdevs_operational": 2, 00:14:32.866 "base_bdevs_list": [ 00:14:32.866 { 00:14:32.866 "name": null, 00:14:32.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.866 "is_configured": false, 00:14:32.866 "data_offset": 0, 00:14:32.866 "data_size": 63488 00:14:32.866 }, 00:14:32.866 { 00:14:32.866 
"name": "BaseBdev2", 00:14:32.866 "uuid": "306d8428-1814-539b-9b17-12d33b62308d", 00:14:32.866 "is_configured": true, 00:14:32.866 "data_offset": 2048, 00:14:32.866 "data_size": 63488 00:14:32.866 }, 00:14:32.866 { 00:14:32.866 "name": "BaseBdev3", 00:14:32.866 "uuid": "0dc2ddeb-f69d-5f85-afd2-ea675e1b8a86", 00:14:32.866 "is_configured": true, 00:14:32.866 "data_offset": 2048, 00:14:32.866 "data_size": 63488 00:14:32.866 } 00:14:32.866 ] 00:14:32.866 }' 00:14:32.866 01:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.866 01:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.436 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:33.436 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.436 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:33.436 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:33.436 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.436 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.436 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.436 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.436 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.436 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.436 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.436 "name": "raid_bdev1", 00:14:33.436 "uuid": "c194376b-84a3-49c5-9b40-d88cf71185fd", 00:14:33.436 
"strip_size_kb": 64, 00:14:33.436 "state": "online", 00:14:33.436 "raid_level": "raid5f", 00:14:33.436 "superblock": true, 00:14:33.436 "num_base_bdevs": 3, 00:14:33.436 "num_base_bdevs_discovered": 2, 00:14:33.436 "num_base_bdevs_operational": 2, 00:14:33.436 "base_bdevs_list": [ 00:14:33.436 { 00:14:33.436 "name": null, 00:14:33.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.436 "is_configured": false, 00:14:33.436 "data_offset": 0, 00:14:33.436 "data_size": 63488 00:14:33.436 }, 00:14:33.436 { 00:14:33.436 "name": "BaseBdev2", 00:14:33.436 "uuid": "306d8428-1814-539b-9b17-12d33b62308d", 00:14:33.436 "is_configured": true, 00:14:33.436 "data_offset": 2048, 00:14:33.436 "data_size": 63488 00:14:33.436 }, 00:14:33.436 { 00:14:33.436 "name": "BaseBdev3", 00:14:33.436 "uuid": "0dc2ddeb-f69d-5f85-afd2-ea675e1b8a86", 00:14:33.436 "is_configured": true, 00:14:33.436 "data_offset": 2048, 00:14:33.436 "data_size": 63488 00:14:33.436 } 00:14:33.436 ] 00:14:33.436 }' 00:14:33.436 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.436 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:33.436 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.436 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:33.436 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 93661 00:14:33.436 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 93661 ']' 00:14:33.436 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 93661 00:14:33.436 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:33.436 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:33.436 01:34:32 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93661 00:14:33.436 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:33.436 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:33.436 killing process with pid 93661 00:14:33.436 Received shutdown signal, test time was about 60.000000 seconds 00:14:33.436 00:14:33.436 Latency(us) 00:14:33.436 [2024-10-09T01:34:32.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.436 [2024-10-09T01:34:32.329Z] =================================================================================================================== 00:14:33.436 [2024-10-09T01:34:32.329Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:33.436 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93661' 00:14:33.436 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 93661 00:14:33.436 [2024-10-09 01:34:32.192820] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:33.436 [2024-10-09 01:34:32.192960] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:33.436 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 93661 00:14:33.436 [2024-10-09 01:34:32.193025] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:33.436 [2024-10-09 01:34:32.193039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:33.436 [2024-10-09 01:34:32.267895] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:34.006 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:34.006 00:14:34.006 real 0m21.708s 00:14:34.006 user 0m27.904s 
00:14:34.006 sys 0m2.936s 00:14:34.006 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:34.006 ************************************ 00:14:34.006 END TEST raid5f_rebuild_test_sb 00:14:34.006 ************************************ 00:14:34.006 01:34:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.007 01:34:32 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:34.007 01:34:32 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:14:34.007 01:34:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:34.007 01:34:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:34.007 01:34:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:34.007 ************************************ 00:14:34.007 START TEST raid5f_state_function_test 00:14:34.007 ************************************ 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=94399 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 94399' 00:14:34.007 Process raid pid: 94399 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 94399 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 94399 ']' 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:34.007 01:34:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.007 [2024-10-09 01:34:32.813634] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:14:34.007 [2024-10-09 01:34:32.813834] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.267 [2024-10-09 01:34:32.947587] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:34.267 [2024-10-09 01:34:32.977229] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.267 [2024-10-09 01:34:33.048684] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.267 [2024-10-09 01:34:33.124449] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.267 [2024-10-09 01:34:33.124509] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.837 01:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:34.837 01:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:34.837 01:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:34.837 01:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.837 01:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.837 [2024-10-09 01:34:33.628847] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:34.837 [2024-10-09 01:34:33.628902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:34.837 [2024-10-09 01:34:33.628922] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:34.837 [2024-10-09 01:34:33.628930] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:34.837 [2024-10-09 01:34:33.628942] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:34.837 [2024-10-09 01:34:33.628948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:34.837 [2024-10-09 01:34:33.628956] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:34.837 [2024-10-09 01:34:33.628963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:34.837 01:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.837 01:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:34.837 01:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.837 01:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:34.837 01:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.837 01:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.837 01:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:34.837 01:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.837 01:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.837 01:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.837 01:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.837 01:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:14:34.837 01:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.837 01:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.837 01:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.837 01:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.837 01:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.837 "name": "Existed_Raid", 00:14:34.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.837 "strip_size_kb": 64, 00:14:34.837 "state": "configuring", 00:14:34.837 "raid_level": "raid5f", 00:14:34.837 "superblock": false, 00:14:34.837 "num_base_bdevs": 4, 00:14:34.837 "num_base_bdevs_discovered": 0, 00:14:34.837 "num_base_bdevs_operational": 4, 00:14:34.837 "base_bdevs_list": [ 00:14:34.837 { 00:14:34.837 "name": "BaseBdev1", 00:14:34.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.837 "is_configured": false, 00:14:34.837 "data_offset": 0, 00:14:34.837 "data_size": 0 00:14:34.837 }, 00:14:34.837 { 00:14:34.837 "name": "BaseBdev2", 00:14:34.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.837 "is_configured": false, 00:14:34.837 "data_offset": 0, 00:14:34.837 "data_size": 0 00:14:34.837 }, 00:14:34.837 { 00:14:34.837 "name": "BaseBdev3", 00:14:34.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.837 "is_configured": false, 00:14:34.837 "data_offset": 0, 00:14:34.837 "data_size": 0 00:14:34.837 }, 00:14:34.837 { 00:14:34.837 "name": "BaseBdev4", 00:14:34.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.837 "is_configured": false, 00:14:34.837 "data_offset": 0, 00:14:34.837 "data_size": 0 00:14:34.837 } 00:14:34.837 ] 00:14:34.837 }' 00:14:34.837 01:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:14:34.837 01:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.407 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:35.407 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.407 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.407 [2024-10-09 01:34:34.124861] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:35.407 [2024-10-09 01:34:34.124959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:35.407 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.407 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:35.407 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.407 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.407 [2024-10-09 01:34:34.136878] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:35.407 [2024-10-09 01:34:34.136952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:35.407 [2024-10-09 01:34:34.136980] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:35.407 [2024-10-09 01:34:34.137000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:35.407 [2024-10-09 01:34:34.137020] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:35.407 [2024-10-09 01:34:34.137038] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:35.407 [2024-10-09 01:34:34.137057] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:35.407 [2024-10-09 01:34:34.137075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:35.407 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.407 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:35.407 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.407 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.407 [2024-10-09 01:34:34.163586] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:35.407 BaseBdev1 00:14:35.407 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.407 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:35.407 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:35.407 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:35.407 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:35.407 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:35.407 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:35.407 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:35.407 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.407 01:34:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.407 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.407 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:35.407 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.407 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.407 [ 00:14:35.407 { 00:14:35.407 "name": "BaseBdev1", 00:14:35.407 "aliases": [ 00:14:35.407 "72e925eb-da90-460c-84d3-13d1af8049cb" 00:14:35.407 ], 00:14:35.407 "product_name": "Malloc disk", 00:14:35.407 "block_size": 512, 00:14:35.407 "num_blocks": 65536, 00:14:35.407 "uuid": "72e925eb-da90-460c-84d3-13d1af8049cb", 00:14:35.407 "assigned_rate_limits": { 00:14:35.407 "rw_ios_per_sec": 0, 00:14:35.407 "rw_mbytes_per_sec": 0, 00:14:35.407 "r_mbytes_per_sec": 0, 00:14:35.407 "w_mbytes_per_sec": 0 00:14:35.407 }, 00:14:35.407 "claimed": true, 00:14:35.407 "claim_type": "exclusive_write", 00:14:35.407 "zoned": false, 00:14:35.407 "supported_io_types": { 00:14:35.407 "read": true, 00:14:35.407 "write": true, 00:14:35.407 "unmap": true, 00:14:35.407 "flush": true, 00:14:35.407 "reset": true, 00:14:35.407 "nvme_admin": false, 00:14:35.407 "nvme_io": false, 00:14:35.407 "nvme_io_md": false, 00:14:35.407 "write_zeroes": true, 00:14:35.407 "zcopy": true, 00:14:35.407 "get_zone_info": false, 00:14:35.407 "zone_management": false, 00:14:35.407 "zone_append": false, 00:14:35.407 "compare": false, 00:14:35.407 "compare_and_write": false, 00:14:35.407 "abort": true, 00:14:35.407 "seek_hole": false, 00:14:35.407 "seek_data": false, 00:14:35.407 "copy": true, 00:14:35.407 "nvme_iov_md": false 00:14:35.407 }, 00:14:35.407 "memory_domains": [ 00:14:35.407 { 00:14:35.407 "dma_device_id": "system", 00:14:35.407 "dma_device_type": 1 
00:14:35.407 }, 00:14:35.408 { 00:14:35.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.408 "dma_device_type": 2 00:14:35.408 } 00:14:35.408 ], 00:14:35.408 "driver_specific": {} 00:14:35.408 } 00:14:35.408 ] 00:14:35.408 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.408 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:35.408 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:35.408 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.408 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.408 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.408 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.408 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:35.408 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.408 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.408 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.408 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.408 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.408 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.408 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.408 
01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.408 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.408 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.408 "name": "Existed_Raid", 00:14:35.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.408 "strip_size_kb": 64, 00:14:35.408 "state": "configuring", 00:14:35.408 "raid_level": "raid5f", 00:14:35.408 "superblock": false, 00:14:35.408 "num_base_bdevs": 4, 00:14:35.408 "num_base_bdevs_discovered": 1, 00:14:35.408 "num_base_bdevs_operational": 4, 00:14:35.408 "base_bdevs_list": [ 00:14:35.408 { 00:14:35.408 "name": "BaseBdev1", 00:14:35.408 "uuid": "72e925eb-da90-460c-84d3-13d1af8049cb", 00:14:35.408 "is_configured": true, 00:14:35.408 "data_offset": 0, 00:14:35.408 "data_size": 65536 00:14:35.408 }, 00:14:35.408 { 00:14:35.408 "name": "BaseBdev2", 00:14:35.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.408 "is_configured": false, 00:14:35.408 "data_offset": 0, 00:14:35.408 "data_size": 0 00:14:35.408 }, 00:14:35.408 { 00:14:35.408 "name": "BaseBdev3", 00:14:35.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.408 "is_configured": false, 00:14:35.408 "data_offset": 0, 00:14:35.408 "data_size": 0 00:14:35.408 }, 00:14:35.408 { 00:14:35.408 "name": "BaseBdev4", 00:14:35.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.408 "is_configured": false, 00:14:35.408 "data_offset": 0, 00:14:35.408 "data_size": 0 00:14:35.408 } 00:14:35.408 ] 00:14:35.408 }' 00:14:35.408 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.408 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.977 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:35.977 01:34:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.977 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.977 [2024-10-09 01:34:34.667742] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:35.977 [2024-10-09 01:34:34.667842] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:35.977 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.977 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:35.977 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.977 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.977 [2024-10-09 01:34:34.679778] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:35.977 [2024-10-09 01:34:34.681897] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:35.977 [2024-10-09 01:34:34.681981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:35.977 [2024-10-09 01:34:34.682022] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:35.977 [2024-10-09 01:34:34.682041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:35.977 [2024-10-09 01:34:34.682060] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:35.977 [2024-10-09 01:34:34.682079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:35.977 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:35.977 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:35.977 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:35.977 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:35.977 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.977 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.977 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.977 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.977 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:35.977 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.977 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.978 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.978 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.978 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.978 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.978 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.978 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.978 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:35.978 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.978 "name": "Existed_Raid", 00:14:35.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.978 "strip_size_kb": 64, 00:14:35.978 "state": "configuring", 00:14:35.978 "raid_level": "raid5f", 00:14:35.978 "superblock": false, 00:14:35.978 "num_base_bdevs": 4, 00:14:35.978 "num_base_bdevs_discovered": 1, 00:14:35.978 "num_base_bdevs_operational": 4, 00:14:35.978 "base_bdevs_list": [ 00:14:35.978 { 00:14:35.978 "name": "BaseBdev1", 00:14:35.978 "uuid": "72e925eb-da90-460c-84d3-13d1af8049cb", 00:14:35.978 "is_configured": true, 00:14:35.978 "data_offset": 0, 00:14:35.978 "data_size": 65536 00:14:35.978 }, 00:14:35.978 { 00:14:35.978 "name": "BaseBdev2", 00:14:35.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.978 "is_configured": false, 00:14:35.978 "data_offset": 0, 00:14:35.978 "data_size": 0 00:14:35.978 }, 00:14:35.978 { 00:14:35.978 "name": "BaseBdev3", 00:14:35.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.978 "is_configured": false, 00:14:35.978 "data_offset": 0, 00:14:35.978 "data_size": 0 00:14:35.978 }, 00:14:35.978 { 00:14:35.978 "name": "BaseBdev4", 00:14:35.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.978 "is_configured": false, 00:14:35.978 "data_offset": 0, 00:14:35.978 "data_size": 0 00:14:35.978 } 00:14:35.978 ] 00:14:35.978 }' 00:14:35.978 01:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.978 01:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.294 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:36.294 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.294 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.294 
[2024-10-09 01:34:35.140950] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:36.294 BaseBdev2 00:14:36.294 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.294 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:36.294 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:36.294 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:36.294 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:36.294 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:36.294 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:36.294 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:36.294 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.294 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.294 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.294 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:36.294 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.294 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.565 [ 00:14:36.565 { 00:14:36.565 "name": "BaseBdev2", 00:14:36.565 "aliases": [ 00:14:36.565 "38b49ba6-5045-4a5a-a78e-b495a5a3d5c6" 00:14:36.565 ], 00:14:36.565 "product_name": "Malloc disk", 00:14:36.565 "block_size": 512, 00:14:36.565 "num_blocks": 
65536, 00:14:36.565 "uuid": "38b49ba6-5045-4a5a-a78e-b495a5a3d5c6", 00:14:36.565 "assigned_rate_limits": { 00:14:36.565 "rw_ios_per_sec": 0, 00:14:36.565 "rw_mbytes_per_sec": 0, 00:14:36.565 "r_mbytes_per_sec": 0, 00:14:36.565 "w_mbytes_per_sec": 0 00:14:36.565 }, 00:14:36.565 "claimed": true, 00:14:36.565 "claim_type": "exclusive_write", 00:14:36.565 "zoned": false, 00:14:36.565 "supported_io_types": { 00:14:36.565 "read": true, 00:14:36.565 "write": true, 00:14:36.565 "unmap": true, 00:14:36.565 "flush": true, 00:14:36.565 "reset": true, 00:14:36.565 "nvme_admin": false, 00:14:36.565 "nvme_io": false, 00:14:36.565 "nvme_io_md": false, 00:14:36.565 "write_zeroes": true, 00:14:36.565 "zcopy": true, 00:14:36.565 "get_zone_info": false, 00:14:36.565 "zone_management": false, 00:14:36.565 "zone_append": false, 00:14:36.565 "compare": false, 00:14:36.565 "compare_and_write": false, 00:14:36.565 "abort": true, 00:14:36.565 "seek_hole": false, 00:14:36.565 "seek_data": false, 00:14:36.565 "copy": true, 00:14:36.565 "nvme_iov_md": false 00:14:36.565 }, 00:14:36.565 "memory_domains": [ 00:14:36.565 { 00:14:36.565 "dma_device_id": "system", 00:14:36.565 "dma_device_type": 1 00:14:36.565 }, 00:14:36.565 { 00:14:36.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.565 "dma_device_type": 2 00:14:36.565 } 00:14:36.565 ], 00:14:36.565 "driver_specific": {} 00:14:36.565 } 00:14:36.565 ] 00:14:36.565 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.565 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:36.565 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:36.565 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:36.565 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:36.565 01:34:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.565 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.565 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.565 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.565 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:36.565 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.565 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.565 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.565 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.565 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.565 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.565 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.565 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.565 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.565 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.565 "name": "Existed_Raid", 00:14:36.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.565 "strip_size_kb": 64, 00:14:36.565 "state": "configuring", 00:14:36.565 "raid_level": "raid5f", 00:14:36.565 "superblock": false, 00:14:36.565 "num_base_bdevs": 4, 00:14:36.565 
"num_base_bdevs_discovered": 2, 00:14:36.565 "num_base_bdevs_operational": 4, 00:14:36.565 "base_bdevs_list": [ 00:14:36.565 { 00:14:36.565 "name": "BaseBdev1", 00:14:36.565 "uuid": "72e925eb-da90-460c-84d3-13d1af8049cb", 00:14:36.565 "is_configured": true, 00:14:36.565 "data_offset": 0, 00:14:36.565 "data_size": 65536 00:14:36.565 }, 00:14:36.565 { 00:14:36.565 "name": "BaseBdev2", 00:14:36.565 "uuid": "38b49ba6-5045-4a5a-a78e-b495a5a3d5c6", 00:14:36.565 "is_configured": true, 00:14:36.565 "data_offset": 0, 00:14:36.565 "data_size": 65536 00:14:36.565 }, 00:14:36.565 { 00:14:36.565 "name": "BaseBdev3", 00:14:36.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.565 "is_configured": false, 00:14:36.565 "data_offset": 0, 00:14:36.565 "data_size": 0 00:14:36.565 }, 00:14:36.565 { 00:14:36.565 "name": "BaseBdev4", 00:14:36.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.565 "is_configured": false, 00:14:36.565 "data_offset": 0, 00:14:36.565 "data_size": 0 00:14:36.565 } 00:14:36.565 ] 00:14:36.565 }' 00:14:36.565 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.565 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.825 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:36.825 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.825 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.825 [2024-10-09 01:34:35.645755] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:36.825 BaseBdev3 00:14:36.825 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.825 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:36.825 01:34:35 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:36.825 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:36.825 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:36.825 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:36.825 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:36.825 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:36.825 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.825 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.825 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.825 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:36.825 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.825 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.825 [ 00:14:36.825 { 00:14:36.825 "name": "BaseBdev3", 00:14:36.825 "aliases": [ 00:14:36.825 "05c82d37-9e12-44d6-9efa-ccca564b29af" 00:14:36.825 ], 00:14:36.825 "product_name": "Malloc disk", 00:14:36.825 "block_size": 512, 00:14:36.825 "num_blocks": 65536, 00:14:36.825 "uuid": "05c82d37-9e12-44d6-9efa-ccca564b29af", 00:14:36.825 "assigned_rate_limits": { 00:14:36.825 "rw_ios_per_sec": 0, 00:14:36.825 "rw_mbytes_per_sec": 0, 00:14:36.825 "r_mbytes_per_sec": 0, 00:14:36.825 "w_mbytes_per_sec": 0 00:14:36.825 }, 00:14:36.825 "claimed": true, 00:14:36.825 "claim_type": "exclusive_write", 00:14:36.825 "zoned": false, 00:14:36.825 
"supported_io_types": { 00:14:36.825 "read": true, 00:14:36.825 "write": true, 00:14:36.825 "unmap": true, 00:14:36.825 "flush": true, 00:14:36.825 "reset": true, 00:14:36.825 "nvme_admin": false, 00:14:36.825 "nvme_io": false, 00:14:36.825 "nvme_io_md": false, 00:14:36.825 "write_zeroes": true, 00:14:36.825 "zcopy": true, 00:14:36.825 "get_zone_info": false, 00:14:36.825 "zone_management": false, 00:14:36.825 "zone_append": false, 00:14:36.825 "compare": false, 00:14:36.825 "compare_and_write": false, 00:14:36.825 "abort": true, 00:14:36.825 "seek_hole": false, 00:14:36.825 "seek_data": false, 00:14:36.825 "copy": true, 00:14:36.826 "nvme_iov_md": false 00:14:36.826 }, 00:14:36.826 "memory_domains": [ 00:14:36.826 { 00:14:36.826 "dma_device_id": "system", 00:14:36.826 "dma_device_type": 1 00:14:36.826 }, 00:14:36.826 { 00:14:36.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.826 "dma_device_type": 2 00:14:36.826 } 00:14:36.826 ], 00:14:36.826 "driver_specific": {} 00:14:36.826 } 00:14:36.826 ] 00:14:36.826 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.826 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:36.826 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:36.826 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:36.826 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:36.826 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.826 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.826 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.826 01:34:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.826 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:36.826 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.826 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.826 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.826 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.826 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.826 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.826 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.826 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.826 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.086 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.086 "name": "Existed_Raid", 00:14:37.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.086 "strip_size_kb": 64, 00:14:37.086 "state": "configuring", 00:14:37.086 "raid_level": "raid5f", 00:14:37.086 "superblock": false, 00:14:37.086 "num_base_bdevs": 4, 00:14:37.086 "num_base_bdevs_discovered": 3, 00:14:37.086 "num_base_bdevs_operational": 4, 00:14:37.086 "base_bdevs_list": [ 00:14:37.086 { 00:14:37.086 "name": "BaseBdev1", 00:14:37.086 "uuid": "72e925eb-da90-460c-84d3-13d1af8049cb", 00:14:37.086 "is_configured": true, 00:14:37.086 "data_offset": 0, 00:14:37.086 "data_size": 65536 00:14:37.086 }, 00:14:37.086 { 00:14:37.086 "name": 
"BaseBdev2", 00:14:37.086 "uuid": "38b49ba6-5045-4a5a-a78e-b495a5a3d5c6", 00:14:37.086 "is_configured": true, 00:14:37.086 "data_offset": 0, 00:14:37.086 "data_size": 65536 00:14:37.086 }, 00:14:37.086 { 00:14:37.086 "name": "BaseBdev3", 00:14:37.086 "uuid": "05c82d37-9e12-44d6-9efa-ccca564b29af", 00:14:37.086 "is_configured": true, 00:14:37.086 "data_offset": 0, 00:14:37.086 "data_size": 65536 00:14:37.086 }, 00:14:37.086 { 00:14:37.086 "name": "BaseBdev4", 00:14:37.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.086 "is_configured": false, 00:14:37.086 "data_offset": 0, 00:14:37.086 "data_size": 0 00:14:37.086 } 00:14:37.086 ] 00:14:37.086 }' 00:14:37.086 01:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.086 01:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.346 [2024-10-09 01:34:36.102551] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:37.346 [2024-10-09 01:34:36.102684] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:37.346 [2024-10-09 01:34:36.102705] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:37.346 [2024-10-09 01:34:36.103062] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:37.346 [2024-10-09 01:34:36.103554] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:37.346 [2024-10-09 01:34:36.103568] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:14:37.346 [2024-10-09 01:34:36.103851] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.346 BaseBdev4 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.346 [ 00:14:37.346 { 00:14:37.346 "name": "BaseBdev4", 00:14:37.346 "aliases": [ 00:14:37.346 "6795a383-329a-477d-99a1-e23eaf344c39" 00:14:37.346 ], 00:14:37.346 "product_name": "Malloc disk", 00:14:37.346 "block_size": 512, 
00:14:37.346 "num_blocks": 65536, 00:14:37.346 "uuid": "6795a383-329a-477d-99a1-e23eaf344c39", 00:14:37.346 "assigned_rate_limits": { 00:14:37.346 "rw_ios_per_sec": 0, 00:14:37.346 "rw_mbytes_per_sec": 0, 00:14:37.346 "r_mbytes_per_sec": 0, 00:14:37.346 "w_mbytes_per_sec": 0 00:14:37.346 }, 00:14:37.346 "claimed": true, 00:14:37.346 "claim_type": "exclusive_write", 00:14:37.346 "zoned": false, 00:14:37.346 "supported_io_types": { 00:14:37.346 "read": true, 00:14:37.346 "write": true, 00:14:37.346 "unmap": true, 00:14:37.346 "flush": true, 00:14:37.346 "reset": true, 00:14:37.346 "nvme_admin": false, 00:14:37.346 "nvme_io": false, 00:14:37.346 "nvme_io_md": false, 00:14:37.346 "write_zeroes": true, 00:14:37.346 "zcopy": true, 00:14:37.346 "get_zone_info": false, 00:14:37.346 "zone_management": false, 00:14:37.346 "zone_append": false, 00:14:37.346 "compare": false, 00:14:37.346 "compare_and_write": false, 00:14:37.346 "abort": true, 00:14:37.346 "seek_hole": false, 00:14:37.346 "seek_data": false, 00:14:37.346 "copy": true, 00:14:37.346 "nvme_iov_md": false 00:14:37.346 }, 00:14:37.346 "memory_domains": [ 00:14:37.346 { 00:14:37.346 "dma_device_id": "system", 00:14:37.346 "dma_device_type": 1 00:14:37.346 }, 00:14:37.346 { 00:14:37.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.346 "dma_device_type": 2 00:14:37.346 } 00:14:37.346 ], 00:14:37.346 "driver_specific": {} 00:14:37.346 } 00:14:37.346 ] 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 
00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.346 "name": "Existed_Raid", 00:14:37.346 "uuid": "e2999b98-0d82-4b45-b765-54b2d23b386d", 00:14:37.346 "strip_size_kb": 64, 00:14:37.346 "state": "online", 00:14:37.346 "raid_level": "raid5f", 00:14:37.346 "superblock": false, 00:14:37.346 "num_base_bdevs": 4, 00:14:37.346 
"num_base_bdevs_discovered": 4, 00:14:37.346 "num_base_bdevs_operational": 4, 00:14:37.346 "base_bdevs_list": [ 00:14:37.346 { 00:14:37.346 "name": "BaseBdev1", 00:14:37.346 "uuid": "72e925eb-da90-460c-84d3-13d1af8049cb", 00:14:37.346 "is_configured": true, 00:14:37.346 "data_offset": 0, 00:14:37.346 "data_size": 65536 00:14:37.346 }, 00:14:37.346 { 00:14:37.346 "name": "BaseBdev2", 00:14:37.346 "uuid": "38b49ba6-5045-4a5a-a78e-b495a5a3d5c6", 00:14:37.346 "is_configured": true, 00:14:37.346 "data_offset": 0, 00:14:37.346 "data_size": 65536 00:14:37.346 }, 00:14:37.346 { 00:14:37.346 "name": "BaseBdev3", 00:14:37.346 "uuid": "05c82d37-9e12-44d6-9efa-ccca564b29af", 00:14:37.346 "is_configured": true, 00:14:37.346 "data_offset": 0, 00:14:37.346 "data_size": 65536 00:14:37.346 }, 00:14:37.346 { 00:14:37.346 "name": "BaseBdev4", 00:14:37.346 "uuid": "6795a383-329a-477d-99a1-e23eaf344c39", 00:14:37.346 "is_configured": true, 00:14:37.346 "data_offset": 0, 00:14:37.346 "data_size": 65536 00:14:37.346 } 00:14:37.346 ] 00:14:37.346 }' 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.346 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:37.916 01:34:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.916 [2024-10-09 01:34:36.575062] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:37.916 "name": "Existed_Raid", 00:14:37.916 "aliases": [ 00:14:37.916 "e2999b98-0d82-4b45-b765-54b2d23b386d" 00:14:37.916 ], 00:14:37.916 "product_name": "Raid Volume", 00:14:37.916 "block_size": 512, 00:14:37.916 "num_blocks": 196608, 00:14:37.916 "uuid": "e2999b98-0d82-4b45-b765-54b2d23b386d", 00:14:37.916 "assigned_rate_limits": { 00:14:37.916 "rw_ios_per_sec": 0, 00:14:37.916 "rw_mbytes_per_sec": 0, 00:14:37.916 "r_mbytes_per_sec": 0, 00:14:37.916 "w_mbytes_per_sec": 0 00:14:37.916 }, 00:14:37.916 "claimed": false, 00:14:37.916 "zoned": false, 00:14:37.916 "supported_io_types": { 00:14:37.916 "read": true, 00:14:37.916 "write": true, 00:14:37.916 "unmap": false, 00:14:37.916 "flush": false, 00:14:37.916 "reset": true, 00:14:37.916 "nvme_admin": false, 00:14:37.916 "nvme_io": false, 00:14:37.916 "nvme_io_md": false, 00:14:37.916 "write_zeroes": true, 00:14:37.916 "zcopy": false, 00:14:37.916 "get_zone_info": false, 00:14:37.916 "zone_management": false, 00:14:37.916 "zone_append": false, 00:14:37.916 "compare": false, 00:14:37.916 "compare_and_write": false, 00:14:37.916 "abort": false, 00:14:37.916 "seek_hole": false, 00:14:37.916 "seek_data": false, 00:14:37.916 "copy": false, 00:14:37.916 "nvme_iov_md": false 
00:14:37.916 }, 00:14:37.916 "driver_specific": { 00:14:37.916 "raid": { 00:14:37.916 "uuid": "e2999b98-0d82-4b45-b765-54b2d23b386d", 00:14:37.916 "strip_size_kb": 64, 00:14:37.916 "state": "online", 00:14:37.916 "raid_level": "raid5f", 00:14:37.916 "superblock": false, 00:14:37.916 "num_base_bdevs": 4, 00:14:37.916 "num_base_bdevs_discovered": 4, 00:14:37.916 "num_base_bdevs_operational": 4, 00:14:37.916 "base_bdevs_list": [ 00:14:37.916 { 00:14:37.916 "name": "BaseBdev1", 00:14:37.916 "uuid": "72e925eb-da90-460c-84d3-13d1af8049cb", 00:14:37.916 "is_configured": true, 00:14:37.916 "data_offset": 0, 00:14:37.916 "data_size": 65536 00:14:37.916 }, 00:14:37.916 { 00:14:37.916 "name": "BaseBdev2", 00:14:37.916 "uuid": "38b49ba6-5045-4a5a-a78e-b495a5a3d5c6", 00:14:37.916 "is_configured": true, 00:14:37.916 "data_offset": 0, 00:14:37.916 "data_size": 65536 00:14:37.916 }, 00:14:37.916 { 00:14:37.916 "name": "BaseBdev3", 00:14:37.916 "uuid": "05c82d37-9e12-44d6-9efa-ccca564b29af", 00:14:37.916 "is_configured": true, 00:14:37.916 "data_offset": 0, 00:14:37.916 "data_size": 65536 00:14:37.916 }, 00:14:37.916 { 00:14:37.916 "name": "BaseBdev4", 00:14:37.916 "uuid": "6795a383-329a-477d-99a1-e23eaf344c39", 00:14:37.916 "is_configured": true, 00:14:37.916 "data_offset": 0, 00:14:37.916 "data_size": 65536 00:14:37.916 } 00:14:37.916 ] 00:14:37.916 } 00:14:37.916 } 00:14:37.916 }' 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:37.916 BaseBdev2 00:14:37.916 BaseBdev3 00:14:37.916 BaseBdev4' 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='512 ' 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.916 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:38.176 
01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.176 [2024-10-09 01:34:36.875008] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.176 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.177 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.177 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.177 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.177 "name": "Existed_Raid", 00:14:38.177 "uuid": "e2999b98-0d82-4b45-b765-54b2d23b386d", 00:14:38.177 "strip_size_kb": 64, 00:14:38.177 "state": "online", 00:14:38.177 "raid_level": "raid5f", 00:14:38.177 "superblock": false, 00:14:38.177 "num_base_bdevs": 4, 00:14:38.177 "num_base_bdevs_discovered": 3, 00:14:38.177 "num_base_bdevs_operational": 3, 00:14:38.177 "base_bdevs_list": [ 00:14:38.177 { 00:14:38.177 "name": null, 00:14:38.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.177 "is_configured": false, 00:14:38.177 "data_offset": 0, 00:14:38.177 "data_size": 65536 00:14:38.177 }, 00:14:38.177 { 00:14:38.177 "name": "BaseBdev2", 00:14:38.177 "uuid": "38b49ba6-5045-4a5a-a78e-b495a5a3d5c6", 00:14:38.177 "is_configured": true, 00:14:38.177 "data_offset": 0, 00:14:38.177 "data_size": 65536 00:14:38.177 }, 00:14:38.177 { 00:14:38.177 "name": "BaseBdev3", 00:14:38.177 "uuid": "05c82d37-9e12-44d6-9efa-ccca564b29af", 00:14:38.177 "is_configured": true, 00:14:38.177 "data_offset": 0, 00:14:38.177 "data_size": 65536 00:14:38.177 }, 00:14:38.177 { 00:14:38.177 "name": "BaseBdev4", 00:14:38.177 "uuid": "6795a383-329a-477d-99a1-e23eaf344c39", 00:14:38.177 
"is_configured": true, 00:14:38.177 "data_offset": 0, 00:14:38.177 "data_size": 65536 00:14:38.177 } 00:14:38.177 ] 00:14:38.177 }' 00:14:38.177 01:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.177 01:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.746 [2024-10-09 01:34:37.408027] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:38.746 [2024-10-09 01:34:37.408140] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:38.746 [2024-10-09 01:34:37.428490] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.746 [2024-10-09 01:34:37.488544] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:38.746 01:34:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.746 [2024-10-09 01:34:37.568557] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:38.746 [2024-10-09 01:34:37.568653] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.746 01:34:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.746 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.007 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:39.007 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:39.007 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:39.007 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:39.007 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:39.007 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:39.007 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.007 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.007 BaseBdev2 00:14:39.007 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.007 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:39.007 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:39.007 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:39.007 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:39.007 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:39.007 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:14:39.007 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:39.007 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.007 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.007 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.007 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:39.007 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.007 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.007 [ 00:14:39.007 { 00:14:39.007 "name": "BaseBdev2", 00:14:39.007 "aliases": [ 00:14:39.007 "b83732bc-28d1-4461-82f8-1ffd13d46159" 00:14:39.007 ], 00:14:39.007 "product_name": "Malloc disk", 00:14:39.008 "block_size": 512, 00:14:39.008 "num_blocks": 65536, 00:14:39.008 "uuid": "b83732bc-28d1-4461-82f8-1ffd13d46159", 00:14:39.008 "assigned_rate_limits": { 00:14:39.008 "rw_ios_per_sec": 0, 00:14:39.008 "rw_mbytes_per_sec": 0, 00:14:39.008 "r_mbytes_per_sec": 0, 00:14:39.008 "w_mbytes_per_sec": 0 00:14:39.008 }, 00:14:39.008 "claimed": false, 00:14:39.008 "zoned": false, 00:14:39.008 "supported_io_types": { 00:14:39.008 "read": true, 00:14:39.008 "write": true, 00:14:39.008 "unmap": true, 00:14:39.008 "flush": true, 00:14:39.008 "reset": true, 00:14:39.008 "nvme_admin": false, 00:14:39.008 "nvme_io": false, 00:14:39.008 "nvme_io_md": false, 00:14:39.008 "write_zeroes": true, 00:14:39.008 "zcopy": true, 00:14:39.008 "get_zone_info": false, 00:14:39.008 "zone_management": false, 00:14:39.008 "zone_append": false, 00:14:39.008 "compare": false, 00:14:39.008 "compare_and_write": false, 00:14:39.008 "abort": true, 00:14:39.008 "seek_hole": false, 00:14:39.008 
"seek_data": false, 00:14:39.008 "copy": true, 00:14:39.008 "nvme_iov_md": false 00:14:39.008 }, 00:14:39.008 "memory_domains": [ 00:14:39.008 { 00:14:39.008 "dma_device_id": "system", 00:14:39.008 "dma_device_type": 1 00:14:39.008 }, 00:14:39.008 { 00:14:39.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.008 "dma_device_type": 2 00:14:39.008 } 00:14:39.008 ], 00:14:39.008 "driver_specific": {} 00:14:39.008 } 00:14:39.008 ] 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.008 BaseBdev3 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.008 [ 00:14:39.008 { 00:14:39.008 "name": "BaseBdev3", 00:14:39.008 "aliases": [ 00:14:39.008 "8b894ba1-9aed-404e-9e04-46bd7e1805e6" 00:14:39.008 ], 00:14:39.008 "product_name": "Malloc disk", 00:14:39.008 "block_size": 512, 00:14:39.008 "num_blocks": 65536, 00:14:39.008 "uuid": "8b894ba1-9aed-404e-9e04-46bd7e1805e6", 00:14:39.008 "assigned_rate_limits": { 00:14:39.008 "rw_ios_per_sec": 0, 00:14:39.008 "rw_mbytes_per_sec": 0, 00:14:39.008 "r_mbytes_per_sec": 0, 00:14:39.008 "w_mbytes_per_sec": 0 00:14:39.008 }, 00:14:39.008 "claimed": false, 00:14:39.008 "zoned": false, 00:14:39.008 "supported_io_types": { 00:14:39.008 "read": true, 00:14:39.008 "write": true, 00:14:39.008 "unmap": true, 00:14:39.008 "flush": true, 00:14:39.008 "reset": true, 00:14:39.008 "nvme_admin": false, 00:14:39.008 "nvme_io": false, 00:14:39.008 "nvme_io_md": false, 00:14:39.008 "write_zeroes": true, 00:14:39.008 "zcopy": true, 00:14:39.008 "get_zone_info": false, 00:14:39.008 "zone_management": false, 00:14:39.008 "zone_append": false, 00:14:39.008 "compare": false, 00:14:39.008 "compare_and_write": false, 00:14:39.008 "abort": true, 
00:14:39.008 "seek_hole": false, 00:14:39.008 "seek_data": false, 00:14:39.008 "copy": true, 00:14:39.008 "nvme_iov_md": false 00:14:39.008 }, 00:14:39.008 "memory_domains": [ 00:14:39.008 { 00:14:39.008 "dma_device_id": "system", 00:14:39.008 "dma_device_type": 1 00:14:39.008 }, 00:14:39.008 { 00:14:39.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.008 "dma_device_type": 2 00:14:39.008 } 00:14:39.008 ], 00:14:39.008 "driver_specific": {} 00:14:39.008 } 00:14:39.008 ] 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.008 BaseBdev4 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:39.008 01:34:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.008 [ 00:14:39.008 { 00:14:39.008 "name": "BaseBdev4", 00:14:39.008 "aliases": [ 00:14:39.008 "a0849563-6ccd-42c4-9d8c-f437459b9f79" 00:14:39.008 ], 00:14:39.008 "product_name": "Malloc disk", 00:14:39.008 "block_size": 512, 00:14:39.008 "num_blocks": 65536, 00:14:39.008 "uuid": "a0849563-6ccd-42c4-9d8c-f437459b9f79", 00:14:39.008 "assigned_rate_limits": { 00:14:39.008 "rw_ios_per_sec": 0, 00:14:39.008 "rw_mbytes_per_sec": 0, 00:14:39.008 "r_mbytes_per_sec": 0, 00:14:39.008 "w_mbytes_per_sec": 0 00:14:39.008 }, 00:14:39.008 "claimed": false, 00:14:39.008 "zoned": false, 00:14:39.008 "supported_io_types": { 00:14:39.008 "read": true, 00:14:39.008 "write": true, 00:14:39.008 "unmap": true, 00:14:39.008 "flush": true, 00:14:39.008 "reset": true, 00:14:39.008 "nvme_admin": false, 00:14:39.008 "nvme_io": false, 00:14:39.008 "nvme_io_md": false, 00:14:39.008 "write_zeroes": true, 00:14:39.008 "zcopy": true, 00:14:39.008 "get_zone_info": false, 00:14:39.008 "zone_management": false, 00:14:39.008 "zone_append": false, 00:14:39.008 "compare": false, 00:14:39.008 
"compare_and_write": false, 00:14:39.008 "abort": true, 00:14:39.008 "seek_hole": false, 00:14:39.008 "seek_data": false, 00:14:39.008 "copy": true, 00:14:39.008 "nvme_iov_md": false 00:14:39.008 }, 00:14:39.008 "memory_domains": [ 00:14:39.008 { 00:14:39.008 "dma_device_id": "system", 00:14:39.008 "dma_device_type": 1 00:14:39.008 }, 00:14:39.008 { 00:14:39.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.008 "dma_device_type": 2 00:14:39.008 } 00:14:39.008 ], 00:14:39.008 "driver_specific": {} 00:14:39.008 } 00:14:39.008 ] 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.008 [2024-10-09 01:34:37.829142] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:39.008 [2024-10-09 01:34:37.829271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:39.008 [2024-10-09 01:34:37.829311] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:39.008 [2024-10-09 01:34:37.831438] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:39.008 [2024-10-09 01:34:37.831535] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev4 is claimed 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:39.008 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.009 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:39.009 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.009 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.009 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:39.009 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.009 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.009 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.009 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.009 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.009 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.009 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.009 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.009 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.009 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:14:39.009 "name": "Existed_Raid", 00:14:39.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.009 "strip_size_kb": 64, 00:14:39.009 "state": "configuring", 00:14:39.009 "raid_level": "raid5f", 00:14:39.009 "superblock": false, 00:14:39.009 "num_base_bdevs": 4, 00:14:39.009 "num_base_bdevs_discovered": 3, 00:14:39.009 "num_base_bdevs_operational": 4, 00:14:39.009 "base_bdevs_list": [ 00:14:39.009 { 00:14:39.009 "name": "BaseBdev1", 00:14:39.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.009 "is_configured": false, 00:14:39.009 "data_offset": 0, 00:14:39.009 "data_size": 0 00:14:39.009 }, 00:14:39.009 { 00:14:39.009 "name": "BaseBdev2", 00:14:39.009 "uuid": "b83732bc-28d1-4461-82f8-1ffd13d46159", 00:14:39.009 "is_configured": true, 00:14:39.009 "data_offset": 0, 00:14:39.009 "data_size": 65536 00:14:39.009 }, 00:14:39.009 { 00:14:39.009 "name": "BaseBdev3", 00:14:39.009 "uuid": "8b894ba1-9aed-404e-9e04-46bd7e1805e6", 00:14:39.009 "is_configured": true, 00:14:39.009 "data_offset": 0, 00:14:39.009 "data_size": 65536 00:14:39.009 }, 00:14:39.009 { 00:14:39.009 "name": "BaseBdev4", 00:14:39.009 "uuid": "a0849563-6ccd-42c4-9d8c-f437459b9f79", 00:14:39.009 "is_configured": true, 00:14:39.009 "data_offset": 0, 00:14:39.009 "data_size": 65536 00:14:39.009 } 00:14:39.009 ] 00:14:39.009 }' 00:14:39.009 01:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.009 01:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.578 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:39.578 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.578 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.578 [2024-10-09 01:34:38.301248] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:14:39.578 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.578 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:39.578 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.578 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:39.578 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.578 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.578 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:39.578 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.578 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.578 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.578 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.578 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.578 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.578 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.578 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.578 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.578 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.578 "name": 
"Existed_Raid", 00:14:39.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.578 "strip_size_kb": 64, 00:14:39.578 "state": "configuring", 00:14:39.578 "raid_level": "raid5f", 00:14:39.578 "superblock": false, 00:14:39.578 "num_base_bdevs": 4, 00:14:39.578 "num_base_bdevs_discovered": 2, 00:14:39.578 "num_base_bdevs_operational": 4, 00:14:39.578 "base_bdevs_list": [ 00:14:39.578 { 00:14:39.578 "name": "BaseBdev1", 00:14:39.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.578 "is_configured": false, 00:14:39.578 "data_offset": 0, 00:14:39.578 "data_size": 0 00:14:39.578 }, 00:14:39.578 { 00:14:39.578 "name": null, 00:14:39.578 "uuid": "b83732bc-28d1-4461-82f8-1ffd13d46159", 00:14:39.578 "is_configured": false, 00:14:39.578 "data_offset": 0, 00:14:39.578 "data_size": 65536 00:14:39.578 }, 00:14:39.578 { 00:14:39.578 "name": "BaseBdev3", 00:14:39.578 "uuid": "8b894ba1-9aed-404e-9e04-46bd7e1805e6", 00:14:39.578 "is_configured": true, 00:14:39.578 "data_offset": 0, 00:14:39.578 "data_size": 65536 00:14:39.578 }, 00:14:39.578 { 00:14:39.578 "name": "BaseBdev4", 00:14:39.578 "uuid": "a0849563-6ccd-42c4-9d8c-f437459b9f79", 00:14:39.578 "is_configured": true, 00:14:39.578 "data_offset": 0, 00:14:39.578 "data_size": 65536 00:14:39.578 } 00:14:39.578 ] 00:14:39.578 }' 00:14:39.578 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.579 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.148 01:34:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.148 [2024-10-09 01:34:38.858059] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:40.148 BaseBdev1 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.148 01:34:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.148 [ 00:14:40.148 { 00:14:40.148 "name": "BaseBdev1", 00:14:40.148 "aliases": [ 00:14:40.148 "de506ddc-f826-41e0-8133-9ef0809c2ee8" 00:14:40.148 ], 00:14:40.148 "product_name": "Malloc disk", 00:14:40.148 "block_size": 512, 00:14:40.148 "num_blocks": 65536, 00:14:40.148 "uuid": "de506ddc-f826-41e0-8133-9ef0809c2ee8", 00:14:40.148 "assigned_rate_limits": { 00:14:40.148 "rw_ios_per_sec": 0, 00:14:40.148 "rw_mbytes_per_sec": 0, 00:14:40.148 "r_mbytes_per_sec": 0, 00:14:40.148 "w_mbytes_per_sec": 0 00:14:40.148 }, 00:14:40.148 "claimed": true, 00:14:40.148 "claim_type": "exclusive_write", 00:14:40.148 "zoned": false, 00:14:40.148 "supported_io_types": { 00:14:40.148 "read": true, 00:14:40.148 "write": true, 00:14:40.148 "unmap": true, 00:14:40.148 "flush": true, 00:14:40.148 "reset": true, 00:14:40.148 "nvme_admin": false, 00:14:40.148 "nvme_io": false, 00:14:40.148 "nvme_io_md": false, 00:14:40.148 "write_zeroes": true, 00:14:40.148 "zcopy": true, 00:14:40.148 "get_zone_info": false, 00:14:40.148 "zone_management": false, 00:14:40.148 "zone_append": false, 00:14:40.148 "compare": false, 00:14:40.148 "compare_and_write": false, 00:14:40.148 "abort": true, 00:14:40.148 "seek_hole": false, 00:14:40.148 "seek_data": false, 00:14:40.148 "copy": true, 00:14:40.148 "nvme_iov_md": false 00:14:40.148 }, 00:14:40.148 "memory_domains": [ 00:14:40.148 { 00:14:40.148 "dma_device_id": "system", 00:14:40.148 "dma_device_type": 1 00:14:40.148 }, 00:14:40.148 { 00:14:40.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.148 "dma_device_type": 2 00:14:40.148 } 00:14:40.148 ], 00:14:40.148 "driver_specific": {} 00:14:40.148 } 00:14:40.148 ] 
00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:40.148 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.149 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.149 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.149 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.149 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.149 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.149 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.149 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.149 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.149 01:34:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.149 "name": "Existed_Raid", 00:14:40.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.149 "strip_size_kb": 64, 00:14:40.149 "state": "configuring", 00:14:40.149 "raid_level": "raid5f", 00:14:40.149 "superblock": false, 00:14:40.149 "num_base_bdevs": 4, 00:14:40.149 "num_base_bdevs_discovered": 3, 00:14:40.149 "num_base_bdevs_operational": 4, 00:14:40.149 "base_bdevs_list": [ 00:14:40.149 { 00:14:40.149 "name": "BaseBdev1", 00:14:40.149 "uuid": "de506ddc-f826-41e0-8133-9ef0809c2ee8", 00:14:40.149 "is_configured": true, 00:14:40.149 "data_offset": 0, 00:14:40.149 "data_size": 65536 00:14:40.149 }, 00:14:40.149 { 00:14:40.149 "name": null, 00:14:40.149 "uuid": "b83732bc-28d1-4461-82f8-1ffd13d46159", 00:14:40.149 "is_configured": false, 00:14:40.149 "data_offset": 0, 00:14:40.149 "data_size": 65536 00:14:40.149 }, 00:14:40.149 { 00:14:40.149 "name": "BaseBdev3", 00:14:40.149 "uuid": "8b894ba1-9aed-404e-9e04-46bd7e1805e6", 00:14:40.149 "is_configured": true, 00:14:40.149 "data_offset": 0, 00:14:40.149 "data_size": 65536 00:14:40.149 }, 00:14:40.149 { 00:14:40.149 "name": "BaseBdev4", 00:14:40.149 "uuid": "a0849563-6ccd-42c4-9d8c-f437459b9f79", 00:14:40.149 "is_configured": true, 00:14:40.149 "data_offset": 0, 00:14:40.149 "data_size": 65536 00:14:40.149 } 00:14:40.149 ] 00:14:40.149 }' 00:14:40.149 01:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.149 01:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.718 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.718 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:40.718 01:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.718 
01:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.718 01:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.718 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:40.718 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:40.718 01:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.718 01:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.718 [2024-10-09 01:34:39.358255] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:40.718 01:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.718 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:40.718 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.718 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.718 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.718 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.718 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:40.718 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.718 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.718 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.718 01:34:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.718 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.718 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.718 01:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.718 01:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.718 01:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.718 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.718 "name": "Existed_Raid", 00:14:40.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.718 "strip_size_kb": 64, 00:14:40.718 "state": "configuring", 00:14:40.718 "raid_level": "raid5f", 00:14:40.718 "superblock": false, 00:14:40.718 "num_base_bdevs": 4, 00:14:40.718 "num_base_bdevs_discovered": 2, 00:14:40.718 "num_base_bdevs_operational": 4, 00:14:40.718 "base_bdevs_list": [ 00:14:40.718 { 00:14:40.718 "name": "BaseBdev1", 00:14:40.718 "uuid": "de506ddc-f826-41e0-8133-9ef0809c2ee8", 00:14:40.718 "is_configured": true, 00:14:40.718 "data_offset": 0, 00:14:40.718 "data_size": 65536 00:14:40.718 }, 00:14:40.718 { 00:14:40.718 "name": null, 00:14:40.718 "uuid": "b83732bc-28d1-4461-82f8-1ffd13d46159", 00:14:40.718 "is_configured": false, 00:14:40.718 "data_offset": 0, 00:14:40.718 "data_size": 65536 00:14:40.718 }, 00:14:40.718 { 00:14:40.718 "name": null, 00:14:40.718 "uuid": "8b894ba1-9aed-404e-9e04-46bd7e1805e6", 00:14:40.718 "is_configured": false, 00:14:40.718 "data_offset": 0, 00:14:40.718 "data_size": 65536 00:14:40.718 }, 00:14:40.718 { 00:14:40.718 "name": "BaseBdev4", 00:14:40.718 "uuid": "a0849563-6ccd-42c4-9d8c-f437459b9f79", 00:14:40.718 "is_configured": true, 00:14:40.718 
"data_offset": 0, 00:14:40.718 "data_size": 65536 00:14:40.718 } 00:14:40.718 ] 00:14:40.718 }' 00:14:40.718 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.718 01:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.978 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.978 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:40.978 01:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.978 01:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.978 01:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.978 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:40.978 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:40.978 01:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.978 01:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.978 [2024-10-09 01:34:39.866403] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:41.238 01:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.238 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:41.238 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.238 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.238 
01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.238 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.238 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:41.238 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.238 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.238 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.238 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.238 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.238 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.238 01:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.238 01:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.238 01:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.238 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.238 "name": "Existed_Raid", 00:14:41.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.238 "strip_size_kb": 64, 00:14:41.238 "state": "configuring", 00:14:41.238 "raid_level": "raid5f", 00:14:41.238 "superblock": false, 00:14:41.238 "num_base_bdevs": 4, 00:14:41.238 "num_base_bdevs_discovered": 3, 00:14:41.238 "num_base_bdevs_operational": 4, 00:14:41.238 "base_bdevs_list": [ 00:14:41.238 { 00:14:41.238 "name": "BaseBdev1", 00:14:41.238 "uuid": "de506ddc-f826-41e0-8133-9ef0809c2ee8", 00:14:41.238 "is_configured": 
true, 00:14:41.238 "data_offset": 0, 00:14:41.238 "data_size": 65536 00:14:41.238 }, 00:14:41.238 { 00:14:41.238 "name": null, 00:14:41.238 "uuid": "b83732bc-28d1-4461-82f8-1ffd13d46159", 00:14:41.238 "is_configured": false, 00:14:41.238 "data_offset": 0, 00:14:41.238 "data_size": 65536 00:14:41.238 }, 00:14:41.239 { 00:14:41.239 "name": "BaseBdev3", 00:14:41.239 "uuid": "8b894ba1-9aed-404e-9e04-46bd7e1805e6", 00:14:41.239 "is_configured": true, 00:14:41.239 "data_offset": 0, 00:14:41.239 "data_size": 65536 00:14:41.239 }, 00:14:41.239 { 00:14:41.239 "name": "BaseBdev4", 00:14:41.239 "uuid": "a0849563-6ccd-42c4-9d8c-f437459b9f79", 00:14:41.239 "is_configured": true, 00:14:41.239 "data_offset": 0, 00:14:41.239 "data_size": 65536 00:14:41.239 } 00:14:41.239 ] 00:14:41.239 }' 00:14:41.239 01:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.239 01:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.499 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:41.499 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.499 01:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.499 01:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.499 01:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.499 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:41.499 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:41.499 01:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.499 01:34:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:41.499 [2024-10-09 01:34:40.342561] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:41.499 01:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.499 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:41.499 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.499 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.499 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.499 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.499 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:41.499 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.499 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.499 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.499 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.499 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.499 01:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.499 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.499 01:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.758 01:34:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.758 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.758 "name": "Existed_Raid", 00:14:41.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.758 "strip_size_kb": 64, 00:14:41.758 "state": "configuring", 00:14:41.758 "raid_level": "raid5f", 00:14:41.758 "superblock": false, 00:14:41.758 "num_base_bdevs": 4, 00:14:41.758 "num_base_bdevs_discovered": 2, 00:14:41.758 "num_base_bdevs_operational": 4, 00:14:41.758 "base_bdevs_list": [ 00:14:41.758 { 00:14:41.758 "name": null, 00:14:41.758 "uuid": "de506ddc-f826-41e0-8133-9ef0809c2ee8", 00:14:41.758 "is_configured": false, 00:14:41.758 "data_offset": 0, 00:14:41.758 "data_size": 65536 00:14:41.758 }, 00:14:41.758 { 00:14:41.758 "name": null, 00:14:41.758 "uuid": "b83732bc-28d1-4461-82f8-1ffd13d46159", 00:14:41.758 "is_configured": false, 00:14:41.758 "data_offset": 0, 00:14:41.758 "data_size": 65536 00:14:41.758 }, 00:14:41.758 { 00:14:41.758 "name": "BaseBdev3", 00:14:41.758 "uuid": "8b894ba1-9aed-404e-9e04-46bd7e1805e6", 00:14:41.758 "is_configured": true, 00:14:41.758 "data_offset": 0, 00:14:41.758 "data_size": 65536 00:14:41.758 }, 00:14:41.758 { 00:14:41.758 "name": "BaseBdev4", 00:14:41.758 "uuid": "a0849563-6ccd-42c4-9d8c-f437459b9f79", 00:14:41.758 "is_configured": true, 00:14:41.758 "data_offset": 0, 00:14:41.758 "data_size": 65536 00:14:41.758 } 00:14:41.758 ] 00:14:41.758 }' 00:14:41.758 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.758 01:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.018 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.018 01:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.018 01:34:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:42.018 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:42.018 01:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.018 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:42.018 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:42.018 01:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.018 01:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.018 [2024-10-09 01:34:40.858590] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:42.018 01:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.018 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:42.018 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.018 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.018 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.018 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.018 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:42.018 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.018 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.018 01:34:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.018 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.018 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.018 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.018 01:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.018 01:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.018 01:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.018 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.018 "name": "Existed_Raid", 00:14:42.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.018 "strip_size_kb": 64, 00:14:42.018 "state": "configuring", 00:14:42.018 "raid_level": "raid5f", 00:14:42.018 "superblock": false, 00:14:42.018 "num_base_bdevs": 4, 00:14:42.018 "num_base_bdevs_discovered": 3, 00:14:42.018 "num_base_bdevs_operational": 4, 00:14:42.018 "base_bdevs_list": [ 00:14:42.018 { 00:14:42.018 "name": null, 00:14:42.018 "uuid": "de506ddc-f826-41e0-8133-9ef0809c2ee8", 00:14:42.018 "is_configured": false, 00:14:42.018 "data_offset": 0, 00:14:42.018 "data_size": 65536 00:14:42.018 }, 00:14:42.018 { 00:14:42.018 "name": "BaseBdev2", 00:14:42.018 "uuid": "b83732bc-28d1-4461-82f8-1ffd13d46159", 00:14:42.018 "is_configured": true, 00:14:42.018 "data_offset": 0, 00:14:42.018 "data_size": 65536 00:14:42.018 }, 00:14:42.018 { 00:14:42.018 "name": "BaseBdev3", 00:14:42.018 "uuid": "8b894ba1-9aed-404e-9e04-46bd7e1805e6", 00:14:42.018 "is_configured": true, 00:14:42.018 "data_offset": 0, 00:14:42.018 "data_size": 65536 00:14:42.018 }, 00:14:42.018 { 00:14:42.018 "name": 
"BaseBdev4", 00:14:42.018 "uuid": "a0849563-6ccd-42c4-9d8c-f437459b9f79", 00:14:42.018 "is_configured": true, 00:14:42.018 "data_offset": 0, 00:14:42.018 "data_size": 65536 00:14:42.018 } 00:14:42.018 ] 00:14:42.018 }' 00:14:42.018 01:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.018 01:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.587 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:42.587 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.587 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.587 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u de506ddc-f826-41e0-8133-9ef0809c2ee8 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.588 01:34:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.588 [2024-10-09 01:34:41.400858] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:42.588 [2024-10-09 01:34:41.400978] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:42.588 [2024-10-09 01:34:41.400995] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:42.588 [2024-10-09 01:34:41.401279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:14:42.588 [2024-10-09 01:34:41.401812] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:42.588 [2024-10-09 01:34:41.401826] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:42.588 [2024-10-09 01:34:41.402053] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.588 NewBaseBdev 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.588 [ 00:14:42.588 { 00:14:42.588 "name": "NewBaseBdev", 00:14:42.588 "aliases": [ 00:14:42.588 "de506ddc-f826-41e0-8133-9ef0809c2ee8" 00:14:42.588 ], 00:14:42.588 "product_name": "Malloc disk", 00:14:42.588 "block_size": 512, 00:14:42.588 "num_blocks": 65536, 00:14:42.588 "uuid": "de506ddc-f826-41e0-8133-9ef0809c2ee8", 00:14:42.588 "assigned_rate_limits": { 00:14:42.588 "rw_ios_per_sec": 0, 00:14:42.588 "rw_mbytes_per_sec": 0, 00:14:42.588 "r_mbytes_per_sec": 0, 00:14:42.588 "w_mbytes_per_sec": 0 00:14:42.588 }, 00:14:42.588 "claimed": true, 00:14:42.588 "claim_type": "exclusive_write", 00:14:42.588 "zoned": false, 00:14:42.588 "supported_io_types": { 00:14:42.588 "read": true, 00:14:42.588 "write": true, 00:14:42.588 "unmap": true, 00:14:42.588 "flush": true, 00:14:42.588 "reset": true, 00:14:42.588 "nvme_admin": false, 00:14:42.588 "nvme_io": false, 00:14:42.588 "nvme_io_md": false, 00:14:42.588 "write_zeroes": true, 00:14:42.588 "zcopy": true, 00:14:42.588 "get_zone_info": false, 00:14:42.588 "zone_management": false, 00:14:42.588 "zone_append": false, 00:14:42.588 "compare": false, 00:14:42.588 "compare_and_write": false, 00:14:42.588 "abort": true, 00:14:42.588 "seek_hole": false, 00:14:42.588 "seek_data": false, 00:14:42.588 "copy": true, 00:14:42.588 "nvme_iov_md": false 00:14:42.588 }, 00:14:42.588 "memory_domains": [ 00:14:42.588 { 
00:14:42.588 "dma_device_id": "system", 00:14:42.588 "dma_device_type": 1 00:14:42.588 }, 00:14:42.588 { 00:14:42.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.588 "dma_device_type": 2 00:14:42.588 } 00:14:42.588 ], 00:14:42.588 "driver_specific": {} 00:14:42.588 } 00:14:42.588 ] 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.588 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.848 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.848 "name": "Existed_Raid", 00:14:42.848 "uuid": "fd103042-d176-4158-8c36-1e8e9da9e533", 00:14:42.848 "strip_size_kb": 64, 00:14:42.848 "state": "online", 00:14:42.848 "raid_level": "raid5f", 00:14:42.848 "superblock": false, 00:14:42.848 "num_base_bdevs": 4, 00:14:42.848 "num_base_bdevs_discovered": 4, 00:14:42.848 "num_base_bdevs_operational": 4, 00:14:42.848 "base_bdevs_list": [ 00:14:42.848 { 00:14:42.848 "name": "NewBaseBdev", 00:14:42.848 "uuid": "de506ddc-f826-41e0-8133-9ef0809c2ee8", 00:14:42.848 "is_configured": true, 00:14:42.848 "data_offset": 0, 00:14:42.848 "data_size": 65536 00:14:42.848 }, 00:14:42.848 { 00:14:42.848 "name": "BaseBdev2", 00:14:42.848 "uuid": "b83732bc-28d1-4461-82f8-1ffd13d46159", 00:14:42.848 "is_configured": true, 00:14:42.848 "data_offset": 0, 00:14:42.848 "data_size": 65536 00:14:42.848 }, 00:14:42.848 { 00:14:42.848 "name": "BaseBdev3", 00:14:42.848 "uuid": "8b894ba1-9aed-404e-9e04-46bd7e1805e6", 00:14:42.848 "is_configured": true, 00:14:42.848 "data_offset": 0, 00:14:42.848 "data_size": 65536 00:14:42.848 }, 00:14:42.848 { 00:14:42.848 "name": "BaseBdev4", 00:14:42.848 "uuid": "a0849563-6ccd-42c4-9d8c-f437459b9f79", 00:14:42.848 "is_configured": true, 00:14:42.848 "data_offset": 0, 00:14:42.848 "data_size": 65536 00:14:42.848 } 00:14:42.848 ] 00:14:42.848 }' 00:14:42.848 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.848 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.108 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:14:43.108 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:43.108 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:43.108 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:43.108 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:43.108 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:43.108 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:43.108 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:43.108 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.108 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.108 [2024-10-09 01:34:41.869220] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.108 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.108 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:43.108 "name": "Existed_Raid", 00:14:43.108 "aliases": [ 00:14:43.108 "fd103042-d176-4158-8c36-1e8e9da9e533" 00:14:43.108 ], 00:14:43.108 "product_name": "Raid Volume", 00:14:43.108 "block_size": 512, 00:14:43.108 "num_blocks": 196608, 00:14:43.108 "uuid": "fd103042-d176-4158-8c36-1e8e9da9e533", 00:14:43.108 "assigned_rate_limits": { 00:14:43.108 "rw_ios_per_sec": 0, 00:14:43.108 "rw_mbytes_per_sec": 0, 00:14:43.108 "r_mbytes_per_sec": 0, 00:14:43.108 "w_mbytes_per_sec": 0 00:14:43.108 }, 00:14:43.108 "claimed": false, 00:14:43.108 "zoned": false, 00:14:43.108 "supported_io_types": { 00:14:43.108 
"read": true, 00:14:43.108 "write": true, 00:14:43.108 "unmap": false, 00:14:43.108 "flush": false, 00:14:43.108 "reset": true, 00:14:43.108 "nvme_admin": false, 00:14:43.108 "nvme_io": false, 00:14:43.108 "nvme_io_md": false, 00:14:43.108 "write_zeroes": true, 00:14:43.108 "zcopy": false, 00:14:43.108 "get_zone_info": false, 00:14:43.108 "zone_management": false, 00:14:43.108 "zone_append": false, 00:14:43.108 "compare": false, 00:14:43.108 "compare_and_write": false, 00:14:43.108 "abort": false, 00:14:43.108 "seek_hole": false, 00:14:43.108 "seek_data": false, 00:14:43.108 "copy": false, 00:14:43.108 "nvme_iov_md": false 00:14:43.108 }, 00:14:43.108 "driver_specific": { 00:14:43.108 "raid": { 00:14:43.108 "uuid": "fd103042-d176-4158-8c36-1e8e9da9e533", 00:14:43.108 "strip_size_kb": 64, 00:14:43.108 "state": "online", 00:14:43.108 "raid_level": "raid5f", 00:14:43.108 "superblock": false, 00:14:43.108 "num_base_bdevs": 4, 00:14:43.108 "num_base_bdevs_discovered": 4, 00:14:43.108 "num_base_bdevs_operational": 4, 00:14:43.108 "base_bdevs_list": [ 00:14:43.108 { 00:14:43.108 "name": "NewBaseBdev", 00:14:43.108 "uuid": "de506ddc-f826-41e0-8133-9ef0809c2ee8", 00:14:43.108 "is_configured": true, 00:14:43.108 "data_offset": 0, 00:14:43.108 "data_size": 65536 00:14:43.108 }, 00:14:43.108 { 00:14:43.108 "name": "BaseBdev2", 00:14:43.108 "uuid": "b83732bc-28d1-4461-82f8-1ffd13d46159", 00:14:43.108 "is_configured": true, 00:14:43.108 "data_offset": 0, 00:14:43.108 "data_size": 65536 00:14:43.108 }, 00:14:43.108 { 00:14:43.108 "name": "BaseBdev3", 00:14:43.108 "uuid": "8b894ba1-9aed-404e-9e04-46bd7e1805e6", 00:14:43.108 "is_configured": true, 00:14:43.108 "data_offset": 0, 00:14:43.108 "data_size": 65536 00:14:43.108 }, 00:14:43.108 { 00:14:43.108 "name": "BaseBdev4", 00:14:43.108 "uuid": "a0849563-6ccd-42c4-9d8c-f437459b9f79", 00:14:43.108 "is_configured": true, 00:14:43.108 "data_offset": 0, 00:14:43.108 "data_size": 65536 00:14:43.108 } 00:14:43.108 ] 00:14:43.108 } 
00:14:43.108 } 00:14:43.108 }' 00:14:43.108 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:43.108 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:43.108 BaseBdev2 00:14:43.108 BaseBdev3 00:14:43.108 BaseBdev4' 00:14:43.108 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.108 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:43.108 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.108 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.108 01:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:43.108 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.108 01:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.368 
01:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.368 [2024-10-09 01:34:42.165104] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:43.368 [2024-10-09 01:34:42.165129] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:43.368 [2024-10-09 01:34:42.165207] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:43.368 [2024-10-09 01:34:42.165483] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:43.368 [2024-10-09 01:34:42.165498] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 94399 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 94399 ']' 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 94399 00:14:43.368 01:34:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94399 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:43.368 killing process with pid 94399 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94399' 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 94399 00:14:43.368 [2024-10-09 01:34:42.214695] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:43.368 01:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 94399 00:14:43.628 [2024-10-09 01:34:42.291339] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:43.889 ************************************ 00:14:43.889 END TEST raid5f_state_function_test 00:14:43.889 ************************************ 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:43.889 00:14:43.889 real 0m9.961s 00:14:43.889 user 0m16.680s 00:14:43.889 sys 0m2.203s 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.889 01:34:42 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:14:43.889 01:34:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:43.889 01:34:42 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:14:43.889 01:34:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:43.889 ************************************ 00:14:43.889 START TEST raid5f_state_function_test_sb 00:14:43.889 ************************************ 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:43.889 01:34:42 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=95054 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L 
bdev_raid 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 95054' 00:14:43.889 Process raid pid: 95054 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 95054 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 95054 ']' 00:14:43.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:43.889 01:34:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.149 [2024-10-09 01:34:42.859659] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:14:44.149 [2024-10-09 01:34:42.859809] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.149 [2024-10-09 01:34:42.997961] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:44.149 [2024-10-09 01:34:43.020229] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.410 [2024-10-09 01:34:43.091297] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.410 [2024-10-09 01:34:43.166807] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.410 [2024-10-09 01:34:43.166847] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.979 01:34:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:44.979 01:34:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:44.979 01:34:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:44.979 01:34:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.979 01:34:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.979 [2024-10-09 01:34:43.671077] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:44.979 [2024-10-09 01:34:43.671139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:44.979 [2024-10-09 01:34:43.671151] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:44.979 [2024-10-09 01:34:43.671158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:44.979 [2024-10-09 01:34:43.671169] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:44.979 [2024-10-09 01:34:43.671175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:44.979 [2024-10-09 01:34:43.671183] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:14:44.979 [2024-10-09 01:34:43.671190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:44.979 01:34:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.979 01:34:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:44.979 01:34:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.979 01:34:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.979 01:34:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.979 01:34:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.979 01:34:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:44.979 01:34:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.979 01:34:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.979 01:34:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.979 01:34:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.979 01:34:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.979 01:34:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.979 01:34:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.979 01:34:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.979 01:34:43 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.979 01:34:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.979 "name": "Existed_Raid", 00:14:44.979 "uuid": "c28d2994-3ffc-4607-9619-773b476cb868", 00:14:44.979 "strip_size_kb": 64, 00:14:44.979 "state": "configuring", 00:14:44.979 "raid_level": "raid5f", 00:14:44.979 "superblock": true, 00:14:44.979 "num_base_bdevs": 4, 00:14:44.979 "num_base_bdevs_discovered": 0, 00:14:44.979 "num_base_bdevs_operational": 4, 00:14:44.979 "base_bdevs_list": [ 00:14:44.979 { 00:14:44.979 "name": "BaseBdev1", 00:14:44.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.979 "is_configured": false, 00:14:44.979 "data_offset": 0, 00:14:44.979 "data_size": 0 00:14:44.979 }, 00:14:44.979 { 00:14:44.979 "name": "BaseBdev2", 00:14:44.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.979 "is_configured": false, 00:14:44.979 "data_offset": 0, 00:14:44.979 "data_size": 0 00:14:44.979 }, 00:14:44.979 { 00:14:44.979 "name": "BaseBdev3", 00:14:44.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.979 "is_configured": false, 00:14:44.979 "data_offset": 0, 00:14:44.979 "data_size": 0 00:14:44.979 }, 00:14:44.979 { 00:14:44.979 "name": "BaseBdev4", 00:14:44.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.979 "is_configured": false, 00:14:44.979 "data_offset": 0, 00:14:44.979 "data_size": 0 00:14:44.979 } 00:14:44.979 ] 00:14:44.979 }' 00:14:44.979 01:34:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.979 01:34:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.548 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:45.548 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.548 01:34:44 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.548 [2024-10-09 01:34:44.143076] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:45.548 [2024-10-09 01:34:44.143184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:45.548 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.548 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:45.548 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.548 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.548 [2024-10-09 01:34:44.155100] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:45.548 [2024-10-09 01:34:44.155171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:45.548 [2024-10-09 01:34:44.155198] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:45.548 [2024-10-09 01:34:44.155217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:45.548 [2024-10-09 01:34:44.155236] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:45.548 [2024-10-09 01:34:44.155252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:45.548 [2024-10-09 01:34:44.155271] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:45.548 [2024-10-09 01:34:44.155288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:45.548 01:34:44 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.548 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:45.548 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.548 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.548 [2024-10-09 01:34:44.181919] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.548 BaseBdev1 00:14:45.548 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.548 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:45.548 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:45.548 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:45.548 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:45.548 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:45.548 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:45.548 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:45.548 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.549 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.549 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.549 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:14:45.549 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.549 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.549 [ 00:14:45.549 { 00:14:45.549 "name": "BaseBdev1", 00:14:45.549 "aliases": [ 00:14:45.549 "699fe6d5-f07d-466d-af93-feb639ad16e4" 00:14:45.549 ], 00:14:45.549 "product_name": "Malloc disk", 00:14:45.549 "block_size": 512, 00:14:45.549 "num_blocks": 65536, 00:14:45.549 "uuid": "699fe6d5-f07d-466d-af93-feb639ad16e4", 00:14:45.549 "assigned_rate_limits": { 00:14:45.549 "rw_ios_per_sec": 0, 00:14:45.549 "rw_mbytes_per_sec": 0, 00:14:45.549 "r_mbytes_per_sec": 0, 00:14:45.549 "w_mbytes_per_sec": 0 00:14:45.549 }, 00:14:45.549 "claimed": true, 00:14:45.549 "claim_type": "exclusive_write", 00:14:45.549 "zoned": false, 00:14:45.549 "supported_io_types": { 00:14:45.549 "read": true, 00:14:45.549 "write": true, 00:14:45.549 "unmap": true, 00:14:45.549 "flush": true, 00:14:45.549 "reset": true, 00:14:45.549 "nvme_admin": false, 00:14:45.549 "nvme_io": false, 00:14:45.549 "nvme_io_md": false, 00:14:45.549 "write_zeroes": true, 00:14:45.549 "zcopy": true, 00:14:45.549 "get_zone_info": false, 00:14:45.549 "zone_management": false, 00:14:45.549 "zone_append": false, 00:14:45.549 "compare": false, 00:14:45.549 "compare_and_write": false, 00:14:45.549 "abort": true, 00:14:45.549 "seek_hole": false, 00:14:45.549 "seek_data": false, 00:14:45.549 "copy": true, 00:14:45.549 "nvme_iov_md": false 00:14:45.549 }, 00:14:45.549 "memory_domains": [ 00:14:45.549 { 00:14:45.549 "dma_device_id": "system", 00:14:45.549 "dma_device_type": 1 00:14:45.549 }, 00:14:45.549 { 00:14:45.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.549 "dma_device_type": 2 00:14:45.549 } 00:14:45.549 ], 00:14:45.549 "driver_specific": {} 00:14:45.549 } 00:14:45.549 ] 00:14:45.549 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:45.549 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:45.549 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:45.549 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.549 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.549 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.549 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.549 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:45.549 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.549 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.549 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.549 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.549 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.549 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.549 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.549 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.549 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.549 01:34:44 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.549 "name": "Existed_Raid", 00:14:45.549 "uuid": "a73935a2-d5cc-47a7-bf13-f304e9a399cc", 00:14:45.549 "strip_size_kb": 64, 00:14:45.549 "state": "configuring", 00:14:45.549 "raid_level": "raid5f", 00:14:45.549 "superblock": true, 00:14:45.549 "num_base_bdevs": 4, 00:14:45.549 "num_base_bdevs_discovered": 1, 00:14:45.549 "num_base_bdevs_operational": 4, 00:14:45.549 "base_bdevs_list": [ 00:14:45.549 { 00:14:45.549 "name": "BaseBdev1", 00:14:45.549 "uuid": "699fe6d5-f07d-466d-af93-feb639ad16e4", 00:14:45.549 "is_configured": true, 00:14:45.549 "data_offset": 2048, 00:14:45.549 "data_size": 63488 00:14:45.549 }, 00:14:45.549 { 00:14:45.549 "name": "BaseBdev2", 00:14:45.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.549 "is_configured": false, 00:14:45.549 "data_offset": 0, 00:14:45.549 "data_size": 0 00:14:45.549 }, 00:14:45.549 { 00:14:45.549 "name": "BaseBdev3", 00:14:45.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.549 "is_configured": false, 00:14:45.549 "data_offset": 0, 00:14:45.549 "data_size": 0 00:14:45.549 }, 00:14:45.549 { 00:14:45.549 "name": "BaseBdev4", 00:14:45.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.549 "is_configured": false, 00:14:45.549 "data_offset": 0, 00:14:45.549 "data_size": 0 00:14:45.549 } 00:14:45.549 ] 00:14:45.549 }' 00:14:45.549 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.549 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.809 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:45.809 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.809 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.809 [2024-10-09 01:34:44.634078] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:45.809 [2024-10-09 01:34:44.634126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:45.809 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.809 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:45.809 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.809 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.809 [2024-10-09 01:34:44.646122] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.809 [2024-10-09 01:34:44.648093] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:45.809 [2024-10-09 01:34:44.648131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:45.809 [2024-10-09 01:34:44.648141] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:45.809 [2024-10-09 01:34:44.648148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:45.809 [2024-10-09 01:34:44.648155] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:45.809 [2024-10-09 01:34:44.648161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:45.809 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.809 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:45.809 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < 
num_base_bdevs )) 00:14:45.809 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:45.809 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.809 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.809 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.809 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.809 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:45.809 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.809 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.809 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.809 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.809 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.809 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.809 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.809 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.809 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.068 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.068 "name": "Existed_Raid", 00:14:46.068 "uuid": 
"22c46668-5e6e-447c-875b-7dc59e2afb6f", 00:14:46.068 "strip_size_kb": 64, 00:14:46.068 "state": "configuring", 00:14:46.068 "raid_level": "raid5f", 00:14:46.068 "superblock": true, 00:14:46.068 "num_base_bdevs": 4, 00:14:46.068 "num_base_bdevs_discovered": 1, 00:14:46.068 "num_base_bdevs_operational": 4, 00:14:46.068 "base_bdevs_list": [ 00:14:46.068 { 00:14:46.068 "name": "BaseBdev1", 00:14:46.069 "uuid": "699fe6d5-f07d-466d-af93-feb639ad16e4", 00:14:46.069 "is_configured": true, 00:14:46.069 "data_offset": 2048, 00:14:46.069 "data_size": 63488 00:14:46.069 }, 00:14:46.069 { 00:14:46.069 "name": "BaseBdev2", 00:14:46.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.069 "is_configured": false, 00:14:46.069 "data_offset": 0, 00:14:46.069 "data_size": 0 00:14:46.069 }, 00:14:46.069 { 00:14:46.069 "name": "BaseBdev3", 00:14:46.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.069 "is_configured": false, 00:14:46.069 "data_offset": 0, 00:14:46.069 "data_size": 0 00:14:46.069 }, 00:14:46.069 { 00:14:46.069 "name": "BaseBdev4", 00:14:46.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.069 "is_configured": false, 00:14:46.069 "data_offset": 0, 00:14:46.069 "data_size": 0 00:14:46.069 } 00:14:46.069 ] 00:14:46.069 }' 00:14:46.069 01:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.069 01:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.328 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.329 [2024-10-09 01:34:45.114922] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:46.329 BaseBdev2 00:14:46.329 
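The harness above verifies raid state by piping `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and comparing fields such as `state` and `num_base_bdevs_discovered`. A minimal standalone sketch of that filtering step, using an abridged JSON sample shaped like the log output at this point (two base bdevs discovered); the file path and sample values are illustrative, only `jq` is required:

```shell
#!/bin/sh
# Abridged sample shaped like `bdev_raid_get_bdevs all` output in the log above
cat > /tmp/raid_bdevs.json <<'EOF'
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "raid5f",
    "superblock": true,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 4
  }
]
EOF

# Same filter the test uses to isolate the raid bdev under inspection
jq -r '.[] | select(.name == "Existed_Raid")' /tmp/raid_bdevs.json

# Extract individual fields the way verify_raid_bdev_state compares them
jq -r '.[] | select(.name == "Existed_Raid") | .state' /tmp/raid_bdevs.json
```

The second `jq` call prints `configuring` for this sample, which is what the test compares against its `expected_state` local before and after each `bdev_malloc_create`/`bdev_raid_create` step.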
01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.329 [ 00:14:46.329 { 00:14:46.329 "name": "BaseBdev2", 00:14:46.329 "aliases": [ 00:14:46.329 "b020215a-95a3-4109-b295-1e22318a2285" 00:14:46.329 ], 00:14:46.329 "product_name": "Malloc disk", 00:14:46.329 "block_size": 512, 00:14:46.329 "num_blocks": 65536, 00:14:46.329 "uuid": "b020215a-95a3-4109-b295-1e22318a2285", 00:14:46.329 "assigned_rate_limits": { 
00:14:46.329 "rw_ios_per_sec": 0, 00:14:46.329 "rw_mbytes_per_sec": 0, 00:14:46.329 "r_mbytes_per_sec": 0, 00:14:46.329 "w_mbytes_per_sec": 0 00:14:46.329 }, 00:14:46.329 "claimed": true, 00:14:46.329 "claim_type": "exclusive_write", 00:14:46.329 "zoned": false, 00:14:46.329 "supported_io_types": { 00:14:46.329 "read": true, 00:14:46.329 "write": true, 00:14:46.329 "unmap": true, 00:14:46.329 "flush": true, 00:14:46.329 "reset": true, 00:14:46.329 "nvme_admin": false, 00:14:46.329 "nvme_io": false, 00:14:46.329 "nvme_io_md": false, 00:14:46.329 "write_zeroes": true, 00:14:46.329 "zcopy": true, 00:14:46.329 "get_zone_info": false, 00:14:46.329 "zone_management": false, 00:14:46.329 "zone_append": false, 00:14:46.329 "compare": false, 00:14:46.329 "compare_and_write": false, 00:14:46.329 "abort": true, 00:14:46.329 "seek_hole": false, 00:14:46.329 "seek_data": false, 00:14:46.329 "copy": true, 00:14:46.329 "nvme_iov_md": false 00:14:46.329 }, 00:14:46.329 "memory_domains": [ 00:14:46.329 { 00:14:46.329 "dma_device_id": "system", 00:14:46.329 "dma_device_type": 1 00:14:46.329 }, 00:14:46.329 { 00:14:46.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.329 "dma_device_type": 2 00:14:46.329 } 00:14:46.329 ], 00:14:46.329 "driver_specific": {} 00:14:46.329 } 00:14:46.329 ] 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.329 "name": "Existed_Raid", 00:14:46.329 "uuid": "22c46668-5e6e-447c-875b-7dc59e2afb6f", 00:14:46.329 "strip_size_kb": 64, 00:14:46.329 "state": "configuring", 00:14:46.329 "raid_level": "raid5f", 00:14:46.329 "superblock": true, 00:14:46.329 "num_base_bdevs": 4, 00:14:46.329 "num_base_bdevs_discovered": 2, 00:14:46.329 
"num_base_bdevs_operational": 4, 00:14:46.329 "base_bdevs_list": [ 00:14:46.329 { 00:14:46.329 "name": "BaseBdev1", 00:14:46.329 "uuid": "699fe6d5-f07d-466d-af93-feb639ad16e4", 00:14:46.329 "is_configured": true, 00:14:46.329 "data_offset": 2048, 00:14:46.329 "data_size": 63488 00:14:46.329 }, 00:14:46.329 { 00:14:46.329 "name": "BaseBdev2", 00:14:46.329 "uuid": "b020215a-95a3-4109-b295-1e22318a2285", 00:14:46.329 "is_configured": true, 00:14:46.329 "data_offset": 2048, 00:14:46.329 "data_size": 63488 00:14:46.329 }, 00:14:46.329 { 00:14:46.329 "name": "BaseBdev3", 00:14:46.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.329 "is_configured": false, 00:14:46.329 "data_offset": 0, 00:14:46.329 "data_size": 0 00:14:46.329 }, 00:14:46.329 { 00:14:46.329 "name": "BaseBdev4", 00:14:46.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.329 "is_configured": false, 00:14:46.329 "data_offset": 0, 00:14:46.329 "data_size": 0 00:14:46.329 } 00:14:46.329 ] 00:14:46.329 }' 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.329 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.898 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:46.898 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.898 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.898 [2024-10-09 01:34:45.595613] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:46.898 BaseBdev3 00:14:46.898 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.898 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:46.898 01:34:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:46.898 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:46.898 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:46.898 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:46.898 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:46.898 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:46.898 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.898 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.898 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.898 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:46.898 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.899 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.899 [ 00:14:46.899 { 00:14:46.899 "name": "BaseBdev3", 00:14:46.899 "aliases": [ 00:14:46.899 "bfa0ef19-a8f8-4d67-be30-0c747bf07208" 00:14:46.899 ], 00:14:46.899 "product_name": "Malloc disk", 00:14:46.899 "block_size": 512, 00:14:46.899 "num_blocks": 65536, 00:14:46.899 "uuid": "bfa0ef19-a8f8-4d67-be30-0c747bf07208", 00:14:46.899 "assigned_rate_limits": { 00:14:46.899 "rw_ios_per_sec": 0, 00:14:46.899 "rw_mbytes_per_sec": 0, 00:14:46.899 "r_mbytes_per_sec": 0, 00:14:46.899 "w_mbytes_per_sec": 0 00:14:46.899 }, 00:14:46.899 "claimed": true, 00:14:46.899 "claim_type": "exclusive_write", 
00:14:46.899 "zoned": false, 00:14:46.899 "supported_io_types": { 00:14:46.899 "read": true, 00:14:46.899 "write": true, 00:14:46.899 "unmap": true, 00:14:46.899 "flush": true, 00:14:46.899 "reset": true, 00:14:46.899 "nvme_admin": false, 00:14:46.899 "nvme_io": false, 00:14:46.899 "nvme_io_md": false, 00:14:46.899 "write_zeroes": true, 00:14:46.899 "zcopy": true, 00:14:46.899 "get_zone_info": false, 00:14:46.899 "zone_management": false, 00:14:46.899 "zone_append": false, 00:14:46.899 "compare": false, 00:14:46.899 "compare_and_write": false, 00:14:46.899 "abort": true, 00:14:46.899 "seek_hole": false, 00:14:46.899 "seek_data": false, 00:14:46.899 "copy": true, 00:14:46.899 "nvme_iov_md": false 00:14:46.899 }, 00:14:46.899 "memory_domains": [ 00:14:46.899 { 00:14:46.899 "dma_device_id": "system", 00:14:46.899 "dma_device_type": 1 00:14:46.899 }, 00:14:46.899 { 00:14:46.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.899 "dma_device_type": 2 00:14:46.899 } 00:14:46.899 ], 00:14:46.899 "driver_specific": {} 00:14:46.899 } 00:14:46.899 ] 00:14:46.899 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.899 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:46.899 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:46.899 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:46.899 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:46.899 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.899 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.899 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:14:46.899 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.899 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:46.899 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.899 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.899 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.899 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.899 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.899 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.899 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.899 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.899 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.899 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.899 "name": "Existed_Raid", 00:14:46.899 "uuid": "22c46668-5e6e-447c-875b-7dc59e2afb6f", 00:14:46.899 "strip_size_kb": 64, 00:14:46.899 "state": "configuring", 00:14:46.899 "raid_level": "raid5f", 00:14:46.899 "superblock": true, 00:14:46.899 "num_base_bdevs": 4, 00:14:46.899 "num_base_bdevs_discovered": 3, 00:14:46.899 "num_base_bdevs_operational": 4, 00:14:46.899 "base_bdevs_list": [ 00:14:46.899 { 00:14:46.899 "name": "BaseBdev1", 00:14:46.899 "uuid": "699fe6d5-f07d-466d-af93-feb639ad16e4", 00:14:46.899 "is_configured": true, 00:14:46.899 "data_offset": 2048, 
00:14:46.899 "data_size": 63488 00:14:46.899 }, 00:14:46.899 { 00:14:46.899 "name": "BaseBdev2", 00:14:46.899 "uuid": "b020215a-95a3-4109-b295-1e22318a2285", 00:14:46.899 "is_configured": true, 00:14:46.899 "data_offset": 2048, 00:14:46.899 "data_size": 63488 00:14:46.899 }, 00:14:46.899 { 00:14:46.899 "name": "BaseBdev3", 00:14:46.899 "uuid": "bfa0ef19-a8f8-4d67-be30-0c747bf07208", 00:14:46.899 "is_configured": true, 00:14:46.899 "data_offset": 2048, 00:14:46.899 "data_size": 63488 00:14:46.899 }, 00:14:46.899 { 00:14:46.899 "name": "BaseBdev4", 00:14:46.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.899 "is_configured": false, 00:14:46.899 "data_offset": 0, 00:14:46.899 "data_size": 0 00:14:46.899 } 00:14:46.899 ] 00:14:46.899 }' 00:14:46.899 01:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.899 01:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.162 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:47.162 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.162 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.422 [2024-10-09 01:34:46.060420] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:47.422 [2024-10-09 01:34:46.060649] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:47.422 [2024-10-09 01:34:46.060679] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:47.422 [2024-10-09 01:34:46.061000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:47.422 BaseBdev4 00:14:47.422 [2024-10-09 01:34:46.061496] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:47.422 
[2024-10-09 01:34:46.061508] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:47.422 [2024-10-09 01:34:46.061679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.422 [ 00:14:47.422 { 00:14:47.422 "name": "BaseBdev4", 00:14:47.422 "aliases": [ 
00:14:47.422 "9a052930-2c5d-44c9-9cf6-606c23d929c9" 00:14:47.422 ], 00:14:47.422 "product_name": "Malloc disk", 00:14:47.422 "block_size": 512, 00:14:47.422 "num_blocks": 65536, 00:14:47.422 "uuid": "9a052930-2c5d-44c9-9cf6-606c23d929c9", 00:14:47.422 "assigned_rate_limits": { 00:14:47.422 "rw_ios_per_sec": 0, 00:14:47.422 "rw_mbytes_per_sec": 0, 00:14:47.422 "r_mbytes_per_sec": 0, 00:14:47.422 "w_mbytes_per_sec": 0 00:14:47.422 }, 00:14:47.422 "claimed": true, 00:14:47.422 "claim_type": "exclusive_write", 00:14:47.422 "zoned": false, 00:14:47.422 "supported_io_types": { 00:14:47.422 "read": true, 00:14:47.422 "write": true, 00:14:47.422 "unmap": true, 00:14:47.422 "flush": true, 00:14:47.422 "reset": true, 00:14:47.422 "nvme_admin": false, 00:14:47.422 "nvme_io": false, 00:14:47.422 "nvme_io_md": false, 00:14:47.422 "write_zeroes": true, 00:14:47.422 "zcopy": true, 00:14:47.422 "get_zone_info": false, 00:14:47.422 "zone_management": false, 00:14:47.422 "zone_append": false, 00:14:47.422 "compare": false, 00:14:47.422 "compare_and_write": false, 00:14:47.422 "abort": true, 00:14:47.422 "seek_hole": false, 00:14:47.422 "seek_data": false, 00:14:47.422 "copy": true, 00:14:47.422 "nvme_iov_md": false 00:14:47.422 }, 00:14:47.422 "memory_domains": [ 00:14:47.422 { 00:14:47.422 "dma_device_id": "system", 00:14:47.422 "dma_device_type": 1 00:14:47.422 }, 00:14:47.422 { 00:14:47.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.422 "dma_device_type": 2 00:14:47.422 } 00:14:47.422 ], 00:14:47.422 "driver_specific": {} 00:14:47.422 } 00:14:47.422 ] 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < 
num_base_bdevs )) 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.422 "name": "Existed_Raid", 00:14:47.422 "uuid": 
"22c46668-5e6e-447c-875b-7dc59e2afb6f", 00:14:47.422 "strip_size_kb": 64, 00:14:47.422 "state": "online", 00:14:47.422 "raid_level": "raid5f", 00:14:47.422 "superblock": true, 00:14:47.422 "num_base_bdevs": 4, 00:14:47.422 "num_base_bdevs_discovered": 4, 00:14:47.422 "num_base_bdevs_operational": 4, 00:14:47.422 "base_bdevs_list": [ 00:14:47.422 { 00:14:47.422 "name": "BaseBdev1", 00:14:47.422 "uuid": "699fe6d5-f07d-466d-af93-feb639ad16e4", 00:14:47.422 "is_configured": true, 00:14:47.422 "data_offset": 2048, 00:14:47.422 "data_size": 63488 00:14:47.422 }, 00:14:47.422 { 00:14:47.422 "name": "BaseBdev2", 00:14:47.422 "uuid": "b020215a-95a3-4109-b295-1e22318a2285", 00:14:47.422 "is_configured": true, 00:14:47.422 "data_offset": 2048, 00:14:47.422 "data_size": 63488 00:14:47.422 }, 00:14:47.422 { 00:14:47.422 "name": "BaseBdev3", 00:14:47.422 "uuid": "bfa0ef19-a8f8-4d67-be30-0c747bf07208", 00:14:47.422 "is_configured": true, 00:14:47.422 "data_offset": 2048, 00:14:47.422 "data_size": 63488 00:14:47.422 }, 00:14:47.422 { 00:14:47.422 "name": "BaseBdev4", 00:14:47.422 "uuid": "9a052930-2c5d-44c9-9cf6-606c23d929c9", 00:14:47.422 "is_configured": true, 00:14:47.422 "data_offset": 2048, 00:14:47.422 "data_size": 63488 00:14:47.422 } 00:14:47.422 ] 00:14:47.422 }' 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.422 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.681 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:47.681 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:47.681 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:47.681 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:47.681 01:34:46 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:47.681 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:47.681 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:47.681 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:47.681 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.681 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.682 [2024-10-09 01:34:46.560806] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:47.941 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.941 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:47.941 "name": "Existed_Raid", 00:14:47.941 "aliases": [ 00:14:47.941 "22c46668-5e6e-447c-875b-7dc59e2afb6f" 00:14:47.941 ], 00:14:47.941 "product_name": "Raid Volume", 00:14:47.941 "block_size": 512, 00:14:47.941 "num_blocks": 190464, 00:14:47.941 "uuid": "22c46668-5e6e-447c-875b-7dc59e2afb6f", 00:14:47.941 "assigned_rate_limits": { 00:14:47.941 "rw_ios_per_sec": 0, 00:14:47.941 "rw_mbytes_per_sec": 0, 00:14:47.941 "r_mbytes_per_sec": 0, 00:14:47.941 "w_mbytes_per_sec": 0 00:14:47.941 }, 00:14:47.941 "claimed": false, 00:14:47.941 "zoned": false, 00:14:47.941 "supported_io_types": { 00:14:47.941 "read": true, 00:14:47.941 "write": true, 00:14:47.941 "unmap": false, 00:14:47.941 "flush": false, 00:14:47.941 "reset": true, 00:14:47.941 "nvme_admin": false, 00:14:47.941 "nvme_io": false, 00:14:47.941 "nvme_io_md": false, 00:14:47.941 "write_zeroes": true, 00:14:47.941 "zcopy": false, 00:14:47.941 "get_zone_info": false, 00:14:47.941 "zone_management": false, 00:14:47.941 
"zone_append": false, 00:14:47.941 "compare": false, 00:14:47.941 "compare_and_write": false, 00:14:47.941 "abort": false, 00:14:47.941 "seek_hole": false, 00:14:47.941 "seek_data": false, 00:14:47.941 "copy": false, 00:14:47.941 "nvme_iov_md": false 00:14:47.941 }, 00:14:47.941 "driver_specific": { 00:14:47.941 "raid": { 00:14:47.941 "uuid": "22c46668-5e6e-447c-875b-7dc59e2afb6f", 00:14:47.941 "strip_size_kb": 64, 00:14:47.941 "state": "online", 00:14:47.941 "raid_level": "raid5f", 00:14:47.941 "superblock": true, 00:14:47.941 "num_base_bdevs": 4, 00:14:47.941 "num_base_bdevs_discovered": 4, 00:14:47.941 "num_base_bdevs_operational": 4, 00:14:47.941 "base_bdevs_list": [ 00:14:47.941 { 00:14:47.941 "name": "BaseBdev1", 00:14:47.941 "uuid": "699fe6d5-f07d-466d-af93-feb639ad16e4", 00:14:47.941 "is_configured": true, 00:14:47.941 "data_offset": 2048, 00:14:47.941 "data_size": 63488 00:14:47.941 }, 00:14:47.941 { 00:14:47.942 "name": "BaseBdev2", 00:14:47.942 "uuid": "b020215a-95a3-4109-b295-1e22318a2285", 00:14:47.942 "is_configured": true, 00:14:47.942 "data_offset": 2048, 00:14:47.942 "data_size": 63488 00:14:47.942 }, 00:14:47.942 { 00:14:47.942 "name": "BaseBdev3", 00:14:47.942 "uuid": "bfa0ef19-a8f8-4d67-be30-0c747bf07208", 00:14:47.942 "is_configured": true, 00:14:47.942 "data_offset": 2048, 00:14:47.942 "data_size": 63488 00:14:47.942 }, 00:14:47.942 { 00:14:47.942 "name": "BaseBdev4", 00:14:47.942 "uuid": "9a052930-2c5d-44c9-9cf6-606c23d929c9", 00:14:47.942 "is_configured": true, 00:14:47.942 "data_offset": 2048, 00:14:47.942 "data_size": 63488 00:14:47.942 } 00:14:47.942 ] 00:14:47.942 } 00:14:47.942 } 00:14:47.942 }' 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:47.942 BaseBdev2 00:14:47.942 BaseBdev3 
00:14:47.942 BaseBdev4' 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.942 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.202 [2024-10-09 01:34:46.860772] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.202 
01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.202 "name": "Existed_Raid", 00:14:48.202 "uuid": "22c46668-5e6e-447c-875b-7dc59e2afb6f", 00:14:48.202 "strip_size_kb": 64, 00:14:48.202 "state": "online", 00:14:48.202 "raid_level": "raid5f", 00:14:48.202 "superblock": true, 00:14:48.202 "num_base_bdevs": 4, 00:14:48.202 "num_base_bdevs_discovered": 3, 00:14:48.202 "num_base_bdevs_operational": 3, 00:14:48.202 "base_bdevs_list": [ 00:14:48.202 { 00:14:48.202 "name": null, 00:14:48.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.202 "is_configured": false, 00:14:48.202 "data_offset": 0, 00:14:48.202 "data_size": 63488 00:14:48.202 }, 00:14:48.202 { 00:14:48.202 "name": "BaseBdev2", 00:14:48.202 "uuid": "b020215a-95a3-4109-b295-1e22318a2285", 
00:14:48.202 "is_configured": true, 00:14:48.202 "data_offset": 2048, 00:14:48.202 "data_size": 63488 00:14:48.202 }, 00:14:48.202 { 00:14:48.202 "name": "BaseBdev3", 00:14:48.202 "uuid": "bfa0ef19-a8f8-4d67-be30-0c747bf07208", 00:14:48.202 "is_configured": true, 00:14:48.202 "data_offset": 2048, 00:14:48.202 "data_size": 63488 00:14:48.202 }, 00:14:48.202 { 00:14:48.202 "name": "BaseBdev4", 00:14:48.202 "uuid": "9a052930-2c5d-44c9-9cf6-606c23d929c9", 00:14:48.202 "is_configured": true, 00:14:48.202 "data_offset": 2048, 00:14:48.202 "data_size": 63488 00:14:48.202 } 00:14:48.202 ] 00:14:48.202 }' 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.202 01:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.462 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:48.462 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:48.462 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:48.462 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.462 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.462 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:48.722 
01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.722 [2024-10-09 01:34:47.365708] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:48.722 [2024-10-09 01:34:47.365868] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:48.722 [2024-10-09 01:34:47.386204] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.722 01:34:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.722 [2024-10-09 01:34:47.446235] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.722 [2024-10-09 01:34:47.526556] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:48.722 [2024-10-09 01:34:47.526656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:48.722 01:34:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.722 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.982 BaseBdev2 00:14:48.982 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.982 01:34:47 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:48.982 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:48.982 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:48.982 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:48.982 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:48.982 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:48.982 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:48.982 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.982 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.982 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.982 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:48.982 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.982 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.982 [ 00:14:48.982 { 00:14:48.982 "name": "BaseBdev2", 00:14:48.982 "aliases": [ 00:14:48.982 "9e664044-c846-4567-8f53-b07c97c91a40" 00:14:48.982 ], 00:14:48.982 "product_name": "Malloc disk", 00:14:48.982 "block_size": 512, 00:14:48.983 "num_blocks": 65536, 00:14:48.983 "uuid": "9e664044-c846-4567-8f53-b07c97c91a40", 00:14:48.983 "assigned_rate_limits": { 00:14:48.983 "rw_ios_per_sec": 0, 00:14:48.983 "rw_mbytes_per_sec": 0, 00:14:48.983 "r_mbytes_per_sec": 0, 00:14:48.983 "w_mbytes_per_sec": 0 00:14:48.983 }, 
00:14:48.983 "claimed": false, 00:14:48.983 "zoned": false, 00:14:48.983 "supported_io_types": { 00:14:48.983 "read": true, 00:14:48.983 "write": true, 00:14:48.983 "unmap": true, 00:14:48.983 "flush": true, 00:14:48.983 "reset": true, 00:14:48.983 "nvme_admin": false, 00:14:48.983 "nvme_io": false, 00:14:48.983 "nvme_io_md": false, 00:14:48.983 "write_zeroes": true, 00:14:48.983 "zcopy": true, 00:14:48.983 "get_zone_info": false, 00:14:48.983 "zone_management": false, 00:14:48.983 "zone_append": false, 00:14:48.983 "compare": false, 00:14:48.983 "compare_and_write": false, 00:14:48.983 "abort": true, 00:14:48.983 "seek_hole": false, 00:14:48.983 "seek_data": false, 00:14:48.983 "copy": true, 00:14:48.983 "nvme_iov_md": false 00:14:48.983 }, 00:14:48.983 "memory_domains": [ 00:14:48.983 { 00:14:48.983 "dma_device_id": "system", 00:14:48.983 "dma_device_type": 1 00:14:48.983 }, 00:14:48.983 { 00:14:48.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.983 "dma_device_type": 2 00:14:48.983 } 00:14:48.983 ], 00:14:48.983 "driver_specific": {} 00:14:48.983 } 00:14:48.983 ] 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.983 BaseBdev3 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.983 [ 00:14:48.983 { 00:14:48.983 "name": "BaseBdev3", 00:14:48.983 "aliases": [ 00:14:48.983 "c4f2b584-9a37-4265-8b97-663a9eb7ea6e" 00:14:48.983 ], 00:14:48.983 "product_name": "Malloc disk", 00:14:48.983 "block_size": 512, 00:14:48.983 "num_blocks": 65536, 00:14:48.983 "uuid": "c4f2b584-9a37-4265-8b97-663a9eb7ea6e", 00:14:48.983 "assigned_rate_limits": { 00:14:48.983 "rw_ios_per_sec": 0, 00:14:48.983 
"rw_mbytes_per_sec": 0, 00:14:48.983 "r_mbytes_per_sec": 0, 00:14:48.983 "w_mbytes_per_sec": 0 00:14:48.983 }, 00:14:48.983 "claimed": false, 00:14:48.983 "zoned": false, 00:14:48.983 "supported_io_types": { 00:14:48.983 "read": true, 00:14:48.983 "write": true, 00:14:48.983 "unmap": true, 00:14:48.983 "flush": true, 00:14:48.983 "reset": true, 00:14:48.983 "nvme_admin": false, 00:14:48.983 "nvme_io": false, 00:14:48.983 "nvme_io_md": false, 00:14:48.983 "write_zeroes": true, 00:14:48.983 "zcopy": true, 00:14:48.983 "get_zone_info": false, 00:14:48.983 "zone_management": false, 00:14:48.983 "zone_append": false, 00:14:48.983 "compare": false, 00:14:48.983 "compare_and_write": false, 00:14:48.983 "abort": true, 00:14:48.983 "seek_hole": false, 00:14:48.983 "seek_data": false, 00:14:48.983 "copy": true, 00:14:48.983 "nvme_iov_md": false 00:14:48.983 }, 00:14:48.983 "memory_domains": [ 00:14:48.983 { 00:14:48.983 "dma_device_id": "system", 00:14:48.983 "dma_device_type": 1 00:14:48.983 }, 00:14:48.983 { 00:14:48.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.983 "dma_device_type": 2 00:14:48.983 } 00:14:48.983 ], 00:14:48.983 "driver_specific": {} 00:14:48.983 } 00:14:48.983 ] 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:48.983 BaseBdev4 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.983 [ 00:14:48.983 { 00:14:48.983 "name": "BaseBdev4", 00:14:48.983 "aliases": [ 00:14:48.983 "c8954ffc-f9af-41eb-aeca-614429c4cc12" 00:14:48.983 ], 00:14:48.983 "product_name": "Malloc disk", 00:14:48.983 "block_size": 512, 00:14:48.983 "num_blocks": 65536, 00:14:48.983 "uuid": "c8954ffc-f9af-41eb-aeca-614429c4cc12", 
00:14:48.983 "assigned_rate_limits": { 00:14:48.983 "rw_ios_per_sec": 0, 00:14:48.983 "rw_mbytes_per_sec": 0, 00:14:48.983 "r_mbytes_per_sec": 0, 00:14:48.983 "w_mbytes_per_sec": 0 00:14:48.983 }, 00:14:48.983 "claimed": false, 00:14:48.983 "zoned": false, 00:14:48.983 "supported_io_types": { 00:14:48.983 "read": true, 00:14:48.983 "write": true, 00:14:48.983 "unmap": true, 00:14:48.983 "flush": true, 00:14:48.983 "reset": true, 00:14:48.983 "nvme_admin": false, 00:14:48.983 "nvme_io": false, 00:14:48.983 "nvme_io_md": false, 00:14:48.983 "write_zeroes": true, 00:14:48.983 "zcopy": true, 00:14:48.983 "get_zone_info": false, 00:14:48.983 "zone_management": false, 00:14:48.983 "zone_append": false, 00:14:48.983 "compare": false, 00:14:48.983 "compare_and_write": false, 00:14:48.983 "abort": true, 00:14:48.983 "seek_hole": false, 00:14:48.983 "seek_data": false, 00:14:48.983 "copy": true, 00:14:48.983 "nvme_iov_md": false 00:14:48.983 }, 00:14:48.983 "memory_domains": [ 00:14:48.983 { 00:14:48.983 "dma_device_id": "system", 00:14:48.983 "dma_device_type": 1 00:14:48.983 }, 00:14:48.983 { 00:14:48.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.983 "dma_device_type": 2 00:14:48.983 } 00:14:48.983 ], 00:14:48.983 "driver_specific": {} 00:14:48.983 } 00:14:48.983 ] 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.983 [2024-10-09 01:34:47.777588] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:48.983 [2024-10-09 01:34:47.777715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:48.983 [2024-10-09 01:34:47.777754] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:48.983 [2024-10-09 01:34:47.779781] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:48.983 [2024-10-09 01:34:47.779865] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.983 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:48.984 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.984 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.984 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.984 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.984 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:48.984 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.984 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.984 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:48.984 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.984 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.984 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.984 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.984 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.984 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.984 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.984 "name": "Existed_Raid", 00:14:48.984 "uuid": "1554adff-4c5a-48b4-97b8-6469abf959ea", 00:14:48.984 "strip_size_kb": 64, 00:14:48.984 "state": "configuring", 00:14:48.984 "raid_level": "raid5f", 00:14:48.984 "superblock": true, 00:14:48.984 "num_base_bdevs": 4, 00:14:48.984 "num_base_bdevs_discovered": 3, 00:14:48.984 "num_base_bdevs_operational": 4, 00:14:48.984 "base_bdevs_list": [ 00:14:48.984 { 00:14:48.984 "name": "BaseBdev1", 00:14:48.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.984 "is_configured": false, 00:14:48.984 "data_offset": 0, 00:14:48.984 "data_size": 0 00:14:48.984 }, 00:14:48.984 { 00:14:48.984 "name": "BaseBdev2", 00:14:48.984 "uuid": "9e664044-c846-4567-8f53-b07c97c91a40", 00:14:48.984 "is_configured": true, 00:14:48.984 "data_offset": 2048, 00:14:48.984 "data_size": 63488 00:14:48.984 }, 00:14:48.984 { 00:14:48.984 "name": "BaseBdev3", 00:14:48.984 "uuid": "c4f2b584-9a37-4265-8b97-663a9eb7ea6e", 00:14:48.984 "is_configured": true, 00:14:48.984 "data_offset": 2048, 00:14:48.984 "data_size": 63488 00:14:48.984 }, 00:14:48.984 { 00:14:48.984 "name": "BaseBdev4", 00:14:48.984 "uuid": 
"c8954ffc-f9af-41eb-aeca-614429c4cc12", 00:14:48.984 "is_configured": true, 00:14:48.984 "data_offset": 2048, 00:14:48.984 "data_size": 63488 00:14:48.984 } 00:14:48.984 ] 00:14:48.984 }' 00:14:48.984 01:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.984 01:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.570 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:49.570 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.570 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.570 [2024-10-09 01:34:48.213662] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:49.570 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.570 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:49.570 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.570 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.570 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.570 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.570 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.570 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.570 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.570 01:34:48 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.570 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.570 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.570 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.570 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.570 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.570 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.570 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.570 "name": "Existed_Raid", 00:14:49.570 "uuid": "1554adff-4c5a-48b4-97b8-6469abf959ea", 00:14:49.570 "strip_size_kb": 64, 00:14:49.570 "state": "configuring", 00:14:49.570 "raid_level": "raid5f", 00:14:49.570 "superblock": true, 00:14:49.570 "num_base_bdevs": 4, 00:14:49.570 "num_base_bdevs_discovered": 2, 00:14:49.570 "num_base_bdevs_operational": 4, 00:14:49.570 "base_bdevs_list": [ 00:14:49.570 { 00:14:49.570 "name": "BaseBdev1", 00:14:49.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.570 "is_configured": false, 00:14:49.570 "data_offset": 0, 00:14:49.570 "data_size": 0 00:14:49.570 }, 00:14:49.570 { 00:14:49.570 "name": null, 00:14:49.570 "uuid": "9e664044-c846-4567-8f53-b07c97c91a40", 00:14:49.570 "is_configured": false, 00:14:49.570 "data_offset": 0, 00:14:49.570 "data_size": 63488 00:14:49.570 }, 00:14:49.570 { 00:14:49.570 "name": "BaseBdev3", 00:14:49.570 "uuid": "c4f2b584-9a37-4265-8b97-663a9eb7ea6e", 00:14:49.570 "is_configured": true, 00:14:49.570 "data_offset": 2048, 00:14:49.570 "data_size": 63488 00:14:49.570 }, 00:14:49.570 { 
00:14:49.570 "name": "BaseBdev4", 00:14:49.570 "uuid": "c8954ffc-f9af-41eb-aeca-614429c4cc12", 00:14:49.570 "is_configured": true, 00:14:49.570 "data_offset": 2048, 00:14:49.570 "data_size": 63488 00:14:49.570 } 00:14:49.570 ] 00:14:49.570 }' 00:14:49.570 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.570 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.847 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.847 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.847 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.847 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:49.847 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.847 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:49.847 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:49.847 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.847 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.847 [2024-10-09 01:34:48.702340] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.847 BaseBdev1 00:14:49.847 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.847 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:49.847 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- 
# local bdev_name=BaseBdev1 00:14:49.847 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:49.847 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:49.847 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:49.847 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:49.847 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:49.847 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.847 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.847 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.848 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:49.848 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.848 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.848 [ 00:14:49.848 { 00:14:49.848 "name": "BaseBdev1", 00:14:49.848 "aliases": [ 00:14:49.848 "3b913205-c7fd-463d-a1fc-4a56b49bda65" 00:14:49.848 ], 00:14:49.848 "product_name": "Malloc disk", 00:14:49.848 "block_size": 512, 00:14:49.848 "num_blocks": 65536, 00:14:49.848 "uuid": "3b913205-c7fd-463d-a1fc-4a56b49bda65", 00:14:49.848 "assigned_rate_limits": { 00:14:49.848 "rw_ios_per_sec": 0, 00:14:49.848 "rw_mbytes_per_sec": 0, 00:14:49.848 "r_mbytes_per_sec": 0, 00:14:49.848 "w_mbytes_per_sec": 0 00:14:49.848 }, 00:14:49.848 "claimed": true, 00:14:49.848 "claim_type": "exclusive_write", 00:14:49.848 "zoned": false, 00:14:49.848 "supported_io_types": { 00:14:49.848 
"read": true, 00:14:49.848 "write": true, 00:14:49.848 "unmap": true, 00:14:49.848 "flush": true, 00:14:49.848 "reset": true, 00:14:49.848 "nvme_admin": false, 00:14:49.848 "nvme_io": false, 00:14:49.848 "nvme_io_md": false, 00:14:49.848 "write_zeroes": true, 00:14:49.848 "zcopy": true, 00:14:49.848 "get_zone_info": false, 00:14:49.848 "zone_management": false, 00:14:49.848 "zone_append": false, 00:14:49.848 "compare": false, 00:14:49.848 "compare_and_write": false, 00:14:49.848 "abort": true, 00:14:49.848 "seek_hole": false, 00:14:49.848 "seek_data": false, 00:14:49.848 "copy": true, 00:14:49.848 "nvme_iov_md": false 00:14:49.848 }, 00:14:49.848 "memory_domains": [ 00:14:49.848 { 00:14:49.848 "dma_device_id": "system", 00:14:49.848 "dma_device_type": 1 00:14:49.848 }, 00:14:49.848 { 00:14:49.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.848 "dma_device_type": 2 00:14:49.848 } 00:14:49.848 ], 00:14:49.848 "driver_specific": {} 00:14:49.848 } 00:14:49.848 ] 00:14:50.108 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.108 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:50.108 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:50.108 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.108 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.108 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.108 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.108 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:50.108 01:34:48 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.108 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.108 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.108 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.108 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.108 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.108 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.108 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.108 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.108 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.108 "name": "Existed_Raid", 00:14:50.108 "uuid": "1554adff-4c5a-48b4-97b8-6469abf959ea", 00:14:50.108 "strip_size_kb": 64, 00:14:50.108 "state": "configuring", 00:14:50.108 "raid_level": "raid5f", 00:14:50.108 "superblock": true, 00:14:50.108 "num_base_bdevs": 4, 00:14:50.108 "num_base_bdevs_discovered": 3, 00:14:50.108 "num_base_bdevs_operational": 4, 00:14:50.108 "base_bdevs_list": [ 00:14:50.108 { 00:14:50.108 "name": "BaseBdev1", 00:14:50.108 "uuid": "3b913205-c7fd-463d-a1fc-4a56b49bda65", 00:14:50.108 "is_configured": true, 00:14:50.108 "data_offset": 2048, 00:14:50.108 "data_size": 63488 00:14:50.108 }, 00:14:50.108 { 00:14:50.108 "name": null, 00:14:50.108 "uuid": "9e664044-c846-4567-8f53-b07c97c91a40", 00:14:50.108 "is_configured": false, 00:14:50.108 "data_offset": 0, 00:14:50.108 "data_size": 63488 00:14:50.108 }, 00:14:50.108 { 
00:14:50.108 "name": "BaseBdev3", 00:14:50.108 "uuid": "c4f2b584-9a37-4265-8b97-663a9eb7ea6e", 00:14:50.108 "is_configured": true, 00:14:50.108 "data_offset": 2048, 00:14:50.108 "data_size": 63488 00:14:50.108 }, 00:14:50.108 { 00:14:50.108 "name": "BaseBdev4", 00:14:50.108 "uuid": "c8954ffc-f9af-41eb-aeca-614429c4cc12", 00:14:50.108 "is_configured": true, 00:14:50.108 "data_offset": 2048, 00:14:50.108 "data_size": 63488 00:14:50.108 } 00:14:50.108 ] 00:14:50.108 }' 00:14:50.108 01:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.108 01:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.368 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:50.368 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.368 01:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.368 01:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.368 01:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.368 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:50.368 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:50.368 01:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.368 01:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.368 [2024-10-09 01:34:49.246534] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:50.368 01:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.368 01:34:49 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:50.368 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.368 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.368 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.368 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.368 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:50.368 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.368 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.368 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.368 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.368 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.368 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.628 01:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.628 01:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.628 01:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.628 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.628 "name": "Existed_Raid", 00:14:50.628 "uuid": "1554adff-4c5a-48b4-97b8-6469abf959ea", 
00:14:50.628 "strip_size_kb": 64, 00:14:50.628 "state": "configuring", 00:14:50.628 "raid_level": "raid5f", 00:14:50.628 "superblock": true, 00:14:50.628 "num_base_bdevs": 4, 00:14:50.628 "num_base_bdevs_discovered": 2, 00:14:50.628 "num_base_bdevs_operational": 4, 00:14:50.628 "base_bdevs_list": [ 00:14:50.628 { 00:14:50.628 "name": "BaseBdev1", 00:14:50.628 "uuid": "3b913205-c7fd-463d-a1fc-4a56b49bda65", 00:14:50.628 "is_configured": true, 00:14:50.628 "data_offset": 2048, 00:14:50.628 "data_size": 63488 00:14:50.628 }, 00:14:50.628 { 00:14:50.628 "name": null, 00:14:50.628 "uuid": "9e664044-c846-4567-8f53-b07c97c91a40", 00:14:50.628 "is_configured": false, 00:14:50.628 "data_offset": 0, 00:14:50.628 "data_size": 63488 00:14:50.628 }, 00:14:50.628 { 00:14:50.628 "name": null, 00:14:50.628 "uuid": "c4f2b584-9a37-4265-8b97-663a9eb7ea6e", 00:14:50.628 "is_configured": false, 00:14:50.628 "data_offset": 0, 00:14:50.628 "data_size": 63488 00:14:50.628 }, 00:14:50.628 { 00:14:50.628 "name": "BaseBdev4", 00:14:50.628 "uuid": "c8954ffc-f9af-41eb-aeca-614429c4cc12", 00:14:50.628 "is_configured": true, 00:14:50.628 "data_offset": 2048, 00:14:50.628 "data_size": 63488 00:14:50.628 } 00:14:50.628 ] 00:14:50.628 }' 00:14:50.628 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.628 01:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.888 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:50.888 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.888 01:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.888 01:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.888 01:34:49 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.888 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:50.888 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:50.888 01:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.888 01:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.888 [2024-10-09 01:34:49.710679] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:50.888 01:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.888 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:50.888 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.888 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.888 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.888 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.888 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:50.888 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.888 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.888 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.888 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.888 
01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.888 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.888 01:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.888 01:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.888 01:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.888 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.888 "name": "Existed_Raid", 00:14:50.888 "uuid": "1554adff-4c5a-48b4-97b8-6469abf959ea", 00:14:50.888 "strip_size_kb": 64, 00:14:50.888 "state": "configuring", 00:14:50.888 "raid_level": "raid5f", 00:14:50.888 "superblock": true, 00:14:50.888 "num_base_bdevs": 4, 00:14:50.888 "num_base_bdevs_discovered": 3, 00:14:50.888 "num_base_bdevs_operational": 4, 00:14:50.888 "base_bdevs_list": [ 00:14:50.888 { 00:14:50.888 "name": "BaseBdev1", 00:14:50.888 "uuid": "3b913205-c7fd-463d-a1fc-4a56b49bda65", 00:14:50.888 "is_configured": true, 00:14:50.888 "data_offset": 2048, 00:14:50.888 "data_size": 63488 00:14:50.888 }, 00:14:50.888 { 00:14:50.888 "name": null, 00:14:50.888 "uuid": "9e664044-c846-4567-8f53-b07c97c91a40", 00:14:50.888 "is_configured": false, 00:14:50.888 "data_offset": 0, 00:14:50.888 "data_size": 63488 00:14:50.888 }, 00:14:50.888 { 00:14:50.888 "name": "BaseBdev3", 00:14:50.888 "uuid": "c4f2b584-9a37-4265-8b97-663a9eb7ea6e", 00:14:50.888 "is_configured": true, 00:14:50.888 "data_offset": 2048, 00:14:50.888 "data_size": 63488 00:14:50.888 }, 00:14:50.888 { 00:14:50.888 "name": "BaseBdev4", 00:14:50.888 "uuid": "c8954ffc-f9af-41eb-aeca-614429c4cc12", 00:14:50.888 "is_configured": true, 00:14:50.888 "data_offset": 2048, 00:14:50.888 "data_size": 63488 00:14:50.888 } 
00:14:50.888 ] 00:14:50.888 }' 00:14:50.888 01:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.888 01:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.457 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.457 01:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.457 01:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.457 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:51.457 01:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.457 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:51.457 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:51.457 01:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.457 01:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.457 [2024-10-09 01:34:50.218839] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:51.457 01:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.457 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:51.457 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.457 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.457 01:34:50 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.457 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.457 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:51.457 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.457 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.457 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.457 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.457 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.457 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.457 01:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.457 01:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.458 01:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.458 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.458 "name": "Existed_Raid", 00:14:51.458 "uuid": "1554adff-4c5a-48b4-97b8-6469abf959ea", 00:14:51.458 "strip_size_kb": 64, 00:14:51.458 "state": "configuring", 00:14:51.458 "raid_level": "raid5f", 00:14:51.458 "superblock": true, 00:14:51.458 "num_base_bdevs": 4, 00:14:51.458 "num_base_bdevs_discovered": 2, 00:14:51.458 "num_base_bdevs_operational": 4, 00:14:51.458 "base_bdevs_list": [ 00:14:51.458 { 00:14:51.458 "name": null, 00:14:51.458 "uuid": "3b913205-c7fd-463d-a1fc-4a56b49bda65", 00:14:51.458 "is_configured": false, 00:14:51.458 
"data_offset": 0, 00:14:51.458 "data_size": 63488 00:14:51.458 }, 00:14:51.458 { 00:14:51.458 "name": null, 00:14:51.458 "uuid": "9e664044-c846-4567-8f53-b07c97c91a40", 00:14:51.458 "is_configured": false, 00:14:51.458 "data_offset": 0, 00:14:51.458 "data_size": 63488 00:14:51.458 }, 00:14:51.458 { 00:14:51.458 "name": "BaseBdev3", 00:14:51.458 "uuid": "c4f2b584-9a37-4265-8b97-663a9eb7ea6e", 00:14:51.458 "is_configured": true, 00:14:51.458 "data_offset": 2048, 00:14:51.458 "data_size": 63488 00:14:51.458 }, 00:14:51.458 { 00:14:51.458 "name": "BaseBdev4", 00:14:51.458 "uuid": "c8954ffc-f9af-41eb-aeca-614429c4cc12", 00:14:51.458 "is_configured": true, 00:14:51.458 "data_offset": 2048, 00:14:51.458 "data_size": 63488 00:14:51.458 } 00:14:51.458 ] 00:14:51.458 }' 00:14:51.458 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.458 01:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.027 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.027 01:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.027 01:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.027 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:52.027 01:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.027 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:52.027 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:52.027 01:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.027 01:34:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.027 [2024-10-09 01:34:50.738816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:52.027 01:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.027 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:52.027 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.027 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.027 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.027 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.027 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.027 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.027 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.027 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.027 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.027 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.027 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.027 01:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.027 01:34:50 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:52.027 01:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.027 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.027 "name": "Existed_Raid", 00:14:52.027 "uuid": "1554adff-4c5a-48b4-97b8-6469abf959ea", 00:14:52.027 "strip_size_kb": 64, 00:14:52.027 "state": "configuring", 00:14:52.027 "raid_level": "raid5f", 00:14:52.027 "superblock": true, 00:14:52.027 "num_base_bdevs": 4, 00:14:52.027 "num_base_bdevs_discovered": 3, 00:14:52.027 "num_base_bdevs_operational": 4, 00:14:52.027 "base_bdevs_list": [ 00:14:52.027 { 00:14:52.027 "name": null, 00:14:52.027 "uuid": "3b913205-c7fd-463d-a1fc-4a56b49bda65", 00:14:52.027 "is_configured": false, 00:14:52.027 "data_offset": 0, 00:14:52.027 "data_size": 63488 00:14:52.027 }, 00:14:52.027 { 00:14:52.027 "name": "BaseBdev2", 00:14:52.027 "uuid": "9e664044-c846-4567-8f53-b07c97c91a40", 00:14:52.027 "is_configured": true, 00:14:52.027 "data_offset": 2048, 00:14:52.027 "data_size": 63488 00:14:52.027 }, 00:14:52.027 { 00:14:52.027 "name": "BaseBdev3", 00:14:52.027 "uuid": "c4f2b584-9a37-4265-8b97-663a9eb7ea6e", 00:14:52.027 "is_configured": true, 00:14:52.027 "data_offset": 2048, 00:14:52.027 "data_size": 63488 00:14:52.027 }, 00:14:52.027 { 00:14:52.027 "name": "BaseBdev4", 00:14:52.027 "uuid": "c8954ffc-f9af-41eb-aeca-614429c4cc12", 00:14:52.027 "is_configured": true, 00:14:52.027 "data_offset": 2048, 00:14:52.027 "data_size": 63488 00:14:52.027 } 00:14:52.027 ] 00:14:52.027 }' 00:14:52.027 01:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.027 01:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.287 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.287 01:34:51 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.287 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.287 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:52.287 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.287 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:52.287 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.287 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.287 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.287 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3b913205-c7fd-463d-a1fc-4a56b49bda65 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.547 [2024-10-09 01:34:51.229955] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:52.547 [2024-10-09 01:34:51.230196] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:52.547 [2024-10-09 01:34:51.230236] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:52.547 [2024-10-09 01:34:51.230535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006490 00:14:52.547 NewBaseBdev 00:14:52.547 [2024-10-09 01:34:51.231035] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:52.547 [2024-10-09 01:34:51.231088] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:52.547 [2024-10-09 01:34:51.231230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.547 
01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.547 [ 00:14:52.547 { 00:14:52.547 "name": "NewBaseBdev", 00:14:52.547 "aliases": [ 00:14:52.547 "3b913205-c7fd-463d-a1fc-4a56b49bda65" 00:14:52.547 ], 00:14:52.547 "product_name": "Malloc disk", 00:14:52.547 "block_size": 512, 00:14:52.547 "num_blocks": 65536, 00:14:52.547 "uuid": "3b913205-c7fd-463d-a1fc-4a56b49bda65", 00:14:52.547 "assigned_rate_limits": { 00:14:52.547 "rw_ios_per_sec": 0, 00:14:52.547 "rw_mbytes_per_sec": 0, 00:14:52.547 "r_mbytes_per_sec": 0, 00:14:52.547 "w_mbytes_per_sec": 0 00:14:52.547 }, 00:14:52.547 "claimed": true, 00:14:52.547 "claim_type": "exclusive_write", 00:14:52.547 "zoned": false, 00:14:52.547 "supported_io_types": { 00:14:52.547 "read": true, 00:14:52.547 "write": true, 00:14:52.547 "unmap": true, 00:14:52.547 "flush": true, 00:14:52.547 "reset": true, 00:14:52.547 "nvme_admin": false, 00:14:52.547 "nvme_io": false, 00:14:52.547 "nvme_io_md": false, 00:14:52.547 "write_zeroes": true, 00:14:52.547 "zcopy": true, 00:14:52.547 "get_zone_info": false, 00:14:52.547 "zone_management": false, 00:14:52.547 "zone_append": false, 00:14:52.547 "compare": false, 00:14:52.547 "compare_and_write": false, 00:14:52.547 "abort": true, 00:14:52.547 "seek_hole": false, 00:14:52.547 "seek_data": false, 00:14:52.547 "copy": true, 00:14:52.547 "nvme_iov_md": false 00:14:52.547 }, 00:14:52.547 "memory_domains": [ 00:14:52.547 { 00:14:52.547 "dma_device_id": "system", 00:14:52.547 "dma_device_type": 1 00:14:52.547 }, 00:14:52.547 { 00:14:52.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.547 "dma_device_type": 2 00:14:52.547 } 00:14:52.547 ], 00:14:52.547 "driver_specific": {} 00:14:52.547 } 00:14:52.547 ] 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:52.547 01:34:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.547 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.547 "name": "Existed_Raid", 00:14:52.547 "uuid": "1554adff-4c5a-48b4-97b8-6469abf959ea", 00:14:52.547 
"strip_size_kb": 64, 00:14:52.547 "state": "online", 00:14:52.547 "raid_level": "raid5f", 00:14:52.547 "superblock": true, 00:14:52.547 "num_base_bdevs": 4, 00:14:52.547 "num_base_bdevs_discovered": 4, 00:14:52.547 "num_base_bdevs_operational": 4, 00:14:52.547 "base_bdevs_list": [ 00:14:52.547 { 00:14:52.547 "name": "NewBaseBdev", 00:14:52.547 "uuid": "3b913205-c7fd-463d-a1fc-4a56b49bda65", 00:14:52.547 "is_configured": true, 00:14:52.547 "data_offset": 2048, 00:14:52.547 "data_size": 63488 00:14:52.547 }, 00:14:52.547 { 00:14:52.547 "name": "BaseBdev2", 00:14:52.547 "uuid": "9e664044-c846-4567-8f53-b07c97c91a40", 00:14:52.547 "is_configured": true, 00:14:52.548 "data_offset": 2048, 00:14:52.548 "data_size": 63488 00:14:52.548 }, 00:14:52.548 { 00:14:52.548 "name": "BaseBdev3", 00:14:52.548 "uuid": "c4f2b584-9a37-4265-8b97-663a9eb7ea6e", 00:14:52.548 "is_configured": true, 00:14:52.548 "data_offset": 2048, 00:14:52.548 "data_size": 63488 00:14:52.548 }, 00:14:52.548 { 00:14:52.548 "name": "BaseBdev4", 00:14:52.548 "uuid": "c8954ffc-f9af-41eb-aeca-614429c4cc12", 00:14:52.548 "is_configured": true, 00:14:52.548 "data_offset": 2048, 00:14:52.548 "data_size": 63488 00:14:52.548 } 00:14:52.548 ] 00:14:52.548 }' 00:14:52.548 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.548 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.807 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:52.807 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:52.807 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:52.807 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:52.807 01:34:51 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@184 -- # local name 00:14:52.807 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:52.807 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:52.807 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:52.807 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.807 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.807 [2024-10-09 01:34:51.690278] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.067 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.067 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:53.067 "name": "Existed_Raid", 00:14:53.067 "aliases": [ 00:14:53.067 "1554adff-4c5a-48b4-97b8-6469abf959ea" 00:14:53.067 ], 00:14:53.067 "product_name": "Raid Volume", 00:14:53.067 "block_size": 512, 00:14:53.067 "num_blocks": 190464, 00:14:53.067 "uuid": "1554adff-4c5a-48b4-97b8-6469abf959ea", 00:14:53.067 "assigned_rate_limits": { 00:14:53.067 "rw_ios_per_sec": 0, 00:14:53.067 "rw_mbytes_per_sec": 0, 00:14:53.067 "r_mbytes_per_sec": 0, 00:14:53.067 "w_mbytes_per_sec": 0 00:14:53.067 }, 00:14:53.067 "claimed": false, 00:14:53.067 "zoned": false, 00:14:53.067 "supported_io_types": { 00:14:53.067 "read": true, 00:14:53.067 "write": true, 00:14:53.067 "unmap": false, 00:14:53.067 "flush": false, 00:14:53.067 "reset": true, 00:14:53.067 "nvme_admin": false, 00:14:53.067 "nvme_io": false, 00:14:53.067 "nvme_io_md": false, 00:14:53.067 "write_zeroes": true, 00:14:53.067 "zcopy": false, 00:14:53.067 "get_zone_info": false, 00:14:53.067 "zone_management": false, 00:14:53.067 "zone_append": false, 00:14:53.067 "compare": 
false, 00:14:53.067 "compare_and_write": false, 00:14:53.067 "abort": false, 00:14:53.067 "seek_hole": false, 00:14:53.067 "seek_data": false, 00:14:53.067 "copy": false, 00:14:53.067 "nvme_iov_md": false 00:14:53.067 }, 00:14:53.067 "driver_specific": { 00:14:53.067 "raid": { 00:14:53.067 "uuid": "1554adff-4c5a-48b4-97b8-6469abf959ea", 00:14:53.067 "strip_size_kb": 64, 00:14:53.067 "state": "online", 00:14:53.067 "raid_level": "raid5f", 00:14:53.067 "superblock": true, 00:14:53.067 "num_base_bdevs": 4, 00:14:53.067 "num_base_bdevs_discovered": 4, 00:14:53.067 "num_base_bdevs_operational": 4, 00:14:53.067 "base_bdevs_list": [ 00:14:53.067 { 00:14:53.067 "name": "NewBaseBdev", 00:14:53.067 "uuid": "3b913205-c7fd-463d-a1fc-4a56b49bda65", 00:14:53.067 "is_configured": true, 00:14:53.067 "data_offset": 2048, 00:14:53.067 "data_size": 63488 00:14:53.067 }, 00:14:53.067 { 00:14:53.067 "name": "BaseBdev2", 00:14:53.067 "uuid": "9e664044-c846-4567-8f53-b07c97c91a40", 00:14:53.067 "is_configured": true, 00:14:53.067 "data_offset": 2048, 00:14:53.067 "data_size": 63488 00:14:53.067 }, 00:14:53.067 { 00:14:53.067 "name": "BaseBdev3", 00:14:53.067 "uuid": "c4f2b584-9a37-4265-8b97-663a9eb7ea6e", 00:14:53.067 "is_configured": true, 00:14:53.067 "data_offset": 2048, 00:14:53.067 "data_size": 63488 00:14:53.067 }, 00:14:53.067 { 00:14:53.067 "name": "BaseBdev4", 00:14:53.067 "uuid": "c8954ffc-f9af-41eb-aeca-614429c4cc12", 00:14:53.067 "is_configured": true, 00:14:53.067 "data_offset": 2048, 00:14:53.067 "data_size": 63488 00:14:53.067 } 00:14:53.067 ] 00:14:53.067 } 00:14:53.067 } 00:14:53.067 }' 00:14:53.067 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:53.067 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:53.068 BaseBdev2 00:14:53.068 BaseBdev3 00:14:53.068 BaseBdev4' 00:14:53.068 01:34:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.068 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.328 01:34:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.328 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.328 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:53.328 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.328 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.328 [2024-10-09 01:34:51.982180] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:53.328 [2024-10-09 01:34:51.982245] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:53.328 [2024-10-09 01:34:51.982328] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:53.328 [2024-10-09 01:34:51.982620] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:53.328 [2024-10-09 01:34:51.982677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:53.328 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.328 01:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 95054 00:14:53.328 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 95054 ']' 00:14:53.328 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 95054 00:14:53.328 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:53.328 01:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:53.328 01:34:51 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95054 00:14:53.328 01:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:53.328 killing process with pid 95054 00:14:53.328 01:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:53.328 01:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95054' 00:14:53.328 01:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 95054 00:14:53.328 [2024-10-09 01:34:52.033966] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:53.328 01:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 95054 00:14:53.328 [2024-10-09 01:34:52.110284] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:53.899 01:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:53.899 00:14:53.899 real 0m9.734s 00:14:53.899 user 0m16.307s 00:14:53.899 sys 0m2.100s 00:14:53.899 01:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:53.899 ************************************ 00:14:53.899 END TEST raid5f_state_function_test_sb 00:14:53.899 ************************************ 00:14:53.899 01:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.899 01:34:52 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:14:53.899 01:34:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:53.899 01:34:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:53.899 01:34:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:53.899 ************************************ 00:14:53.899 START TEST raid5f_superblock_test 00:14:53.899 
************************************ 00:14:53.899 01:34:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:14:53.899 01:34:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:53.899 01:34:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:53.899 01:34:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:53.899 01:34:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:53.899 01:34:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:53.899 01:34:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:53.899 01:34:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:53.899 01:34:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:53.899 01:34:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:53.899 01:34:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:53.899 01:34:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:53.899 01:34:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:53.899 01:34:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:53.899 01:34:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:53.899 01:34:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:53.899 01:34:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:53.899 01:34:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=95708 00:14:53.899 
01:34:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:53.899 01:34:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 95708 00:14:53.899 01:34:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 95708 ']' 00:14:53.899 01:34:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.899 01:34:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:53.899 01:34:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.899 01:34:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:53.899 01:34:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.899 [2024-10-09 01:34:52.663217] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:14:53.899 [2024-10-09 01:34:52.663437] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95708 ] 00:14:54.159 [2024-10-09 01:34:52.800512] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:54.159 [2024-10-09 01:34:52.829694] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.159 [2024-10-09 01:34:52.900625] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.159 [2024-10-09 01:34:52.975859] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.159 [2024-10-09 01:34:52.975992] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.729 malloc1 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.729 [2024-10-09 01:34:53.498573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:54.729 [2024-10-09 01:34:53.498728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.729 [2024-10-09 01:34:53.498770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:54.729 [2024-10-09 01:34:53.498802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.729 [2024-10-09 01:34:53.501246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.729 [2024-10-09 01:34:53.501338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:54.729 pt1 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:54.729 01:34:53 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.729 malloc2 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.729 [2024-10-09 01:34:53.550776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:54.729 [2024-10-09 01:34:53.550880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.729 [2024-10-09 01:34:53.550918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:54.729 [2024-10-09 01:34:53.550940] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.729 [2024-10-09 01:34:53.555760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.729 [2024-10-09 01:34:53.555934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:54.729 pt2 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 
00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:54.729 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:54.730 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:54.730 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:54.730 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.730 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.730 malloc3 00:14:54.730 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.730 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:54.730 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.730 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.730 [2024-10-09 01:34:53.588411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:54.730 [2024-10-09 01:34:53.588533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.730 [2024-10-09 01:34:53.588569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000008a80 00:14:54.730 [2024-10-09 01:34:53.588599] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.730 [2024-10-09 01:34:53.591049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.730 [2024-10-09 01:34:53.591130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:54.730 pt3 00:14:54.730 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.730 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:54.730 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:54.730 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:54.730 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:54.730 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:54.730 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:54.730 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:54.730 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:54.730 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:54.730 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.730 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.730 malloc4 00:14:54.990 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.990 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd 
bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:54.990 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.990 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.990 [2024-10-09 01:34:53.627320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:54.990 [2024-10-09 01:34:53.627426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.990 [2024-10-09 01:34:53.627467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:54.990 [2024-10-09 01:34:53.627496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.990 [2024-10-09 01:34:53.629951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.990 [2024-10-09 01:34:53.630021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:54.990 pt4 00:14:54.990 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.990 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:54.990 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:54.990 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:54.990 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.990 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.990 [2024-10-09 01:34:53.639367] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:54.990 [2024-10-09 01:34:53.641487] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:54.990 [2024-10-09 01:34:53.641613] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:54.990 [2024-10-09 01:34:53.641699] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:54.990 [2024-10-09 01:34:53.641947] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:54.990 [2024-10-09 01:34:53.641993] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:54.990 [2024-10-09 01:34:53.642279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:54.990 [2024-10-09 01:34:53.642782] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:54.990 [2024-10-09 01:34:53.642842] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:54.990 [2024-10-09 01:34:53.643006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.990 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.990 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:54.990 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.990 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.990 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.990 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.990 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:54.990 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.990 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:14:54.990 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.990 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.990 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.990 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.990 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.990 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.990 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.990 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.990 "name": "raid_bdev1", 00:14:54.990 "uuid": "d12895e7-1cb5-4e8e-995b-79312e6a1187", 00:14:54.990 "strip_size_kb": 64, 00:14:54.990 "state": "online", 00:14:54.990 "raid_level": "raid5f", 00:14:54.990 "superblock": true, 00:14:54.990 "num_base_bdevs": 4, 00:14:54.990 "num_base_bdevs_discovered": 4, 00:14:54.990 "num_base_bdevs_operational": 4, 00:14:54.990 "base_bdevs_list": [ 00:14:54.990 { 00:14:54.990 "name": "pt1", 00:14:54.990 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:54.990 "is_configured": true, 00:14:54.990 "data_offset": 2048, 00:14:54.990 "data_size": 63488 00:14:54.990 }, 00:14:54.990 { 00:14:54.990 "name": "pt2", 00:14:54.990 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:54.990 "is_configured": true, 00:14:54.990 "data_offset": 2048, 00:14:54.990 "data_size": 63488 00:14:54.990 }, 00:14:54.990 { 00:14:54.990 "name": "pt3", 00:14:54.990 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:54.990 "is_configured": true, 00:14:54.990 "data_offset": 2048, 00:14:54.990 "data_size": 63488 00:14:54.990 }, 00:14:54.990 { 00:14:54.990 "name": "pt4", 00:14:54.990 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:14:54.990 "is_configured": true, 00:14:54.991 "data_offset": 2048, 00:14:54.991 "data_size": 63488 00:14:54.991 } 00:14:54.991 ] 00:14:54.991 }' 00:14:54.991 01:34:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.991 01:34:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.250 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:55.250 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:55.250 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:55.250 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:55.250 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:55.250 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:55.250 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:55.250 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:55.250 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.250 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.250 [2024-10-09 01:34:54.054208] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.250 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.250 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:55.250 "name": "raid_bdev1", 00:14:55.250 "aliases": [ 00:14:55.250 "d12895e7-1cb5-4e8e-995b-79312e6a1187" 00:14:55.250 ], 00:14:55.250 "product_name": "Raid Volume", 00:14:55.250 
"block_size": 512, 00:14:55.250 "num_blocks": 190464, 00:14:55.250 "uuid": "d12895e7-1cb5-4e8e-995b-79312e6a1187", 00:14:55.250 "assigned_rate_limits": { 00:14:55.250 "rw_ios_per_sec": 0, 00:14:55.250 "rw_mbytes_per_sec": 0, 00:14:55.250 "r_mbytes_per_sec": 0, 00:14:55.250 "w_mbytes_per_sec": 0 00:14:55.250 }, 00:14:55.250 "claimed": false, 00:14:55.250 "zoned": false, 00:14:55.250 "supported_io_types": { 00:14:55.250 "read": true, 00:14:55.250 "write": true, 00:14:55.250 "unmap": false, 00:14:55.250 "flush": false, 00:14:55.250 "reset": true, 00:14:55.250 "nvme_admin": false, 00:14:55.250 "nvme_io": false, 00:14:55.250 "nvme_io_md": false, 00:14:55.250 "write_zeroes": true, 00:14:55.250 "zcopy": false, 00:14:55.250 "get_zone_info": false, 00:14:55.250 "zone_management": false, 00:14:55.250 "zone_append": false, 00:14:55.250 "compare": false, 00:14:55.250 "compare_and_write": false, 00:14:55.250 "abort": false, 00:14:55.250 "seek_hole": false, 00:14:55.250 "seek_data": false, 00:14:55.250 "copy": false, 00:14:55.250 "nvme_iov_md": false 00:14:55.250 }, 00:14:55.250 "driver_specific": { 00:14:55.250 "raid": { 00:14:55.250 "uuid": "d12895e7-1cb5-4e8e-995b-79312e6a1187", 00:14:55.250 "strip_size_kb": 64, 00:14:55.250 "state": "online", 00:14:55.250 "raid_level": "raid5f", 00:14:55.250 "superblock": true, 00:14:55.251 "num_base_bdevs": 4, 00:14:55.251 "num_base_bdevs_discovered": 4, 00:14:55.251 "num_base_bdevs_operational": 4, 00:14:55.251 "base_bdevs_list": [ 00:14:55.251 { 00:14:55.251 "name": "pt1", 00:14:55.251 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:55.251 "is_configured": true, 00:14:55.251 "data_offset": 2048, 00:14:55.251 "data_size": 63488 00:14:55.251 }, 00:14:55.251 { 00:14:55.251 "name": "pt2", 00:14:55.251 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:55.251 "is_configured": true, 00:14:55.251 "data_offset": 2048, 00:14:55.251 "data_size": 63488 00:14:55.251 }, 00:14:55.251 { 00:14:55.251 "name": "pt3", 00:14:55.251 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:14:55.251 "is_configured": true, 00:14:55.251 "data_offset": 2048, 00:14:55.251 "data_size": 63488 00:14:55.251 }, 00:14:55.251 { 00:14:55.251 "name": "pt4", 00:14:55.251 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:55.251 "is_configured": true, 00:14:55.251 "data_offset": 2048, 00:14:55.251 "data_size": 63488 00:14:55.251 } 00:14:55.251 ] 00:14:55.251 } 00:14:55.251 } 00:14:55.251 }' 00:14:55.251 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:55.251 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:55.251 pt2 00:14:55.251 pt3 00:14:55.251 pt4' 00:14:55.251 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.510 01:34:54 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.510 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.510 [2024-10-09 01:34:54.386269] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d12895e7-1cb5-4e8e-995b-79312e6a1187 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d12895e7-1cb5-4e8e-995b-79312e6a1187 ']' 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:55.770 [2024-10-09 01:34:54.426151] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:55.770 [2024-10-09 01:34:54.426211] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:55.770 [2024-10-09 01:34:54.426307] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.770 [2024-10-09 01:34:54.426391] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:55.770 [2024-10-09 01:34:54.426424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.770 01:34:54 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:55.770 01:34:54 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:55.770 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.771 [2024-10-09 01:34:54.590209] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:55.771 [2024-10-09 01:34:54.592230] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is 
claimed 00:14:55.771 [2024-10-09 01:34:54.592308] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:55.771 [2024-10-09 01:34:54.592353] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:55.771 [2024-10-09 01:34:54.592410] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:55.771 [2024-10-09 01:34:54.592472] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:55.771 [2024-10-09 01:34:54.592536] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:55.771 [2024-10-09 01:34:54.592614] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:55.771 [2024-10-09 01:34:54.592628] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:55.771 [2024-10-09 01:34:54.592639] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:55.771 request: 00:14:55.771 { 00:14:55.771 "name": "raid_bdev1", 00:14:55.771 "raid_level": "raid5f", 00:14:55.771 "base_bdevs": [ 00:14:55.771 "malloc1", 00:14:55.771 "malloc2", 00:14:55.771 "malloc3", 00:14:55.771 "malloc4" 00:14:55.771 ], 00:14:55.771 "strip_size_kb": 64, 00:14:55.771 "superblock": false, 00:14:55.771 "method": "bdev_raid_create", 00:14:55.771 "req_id": 1 00:14:55.771 } 00:14:55.771 Got JSON-RPC error response 00:14:55.771 response: 00:14:55.771 { 00:14:55.771 "code": -17, 00:14:55.771 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:55.771 } 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # 
es=1 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.771 [2024-10-09 01:34:54.654208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:55.771 [2024-10-09 01:34:54.654295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.771 [2024-10-09 01:34:54.654323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:55.771 [2024-10-09 01:34:54.654352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.771 [2024-10-09 01:34:54.656573] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:14:55.771 [2024-10-09 01:34:54.656639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:55.771 [2024-10-09 01:34:54.656736] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:55.771 [2024-10-09 01:34:54.656803] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:55.771 pt1 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.771 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.031 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.031 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.031 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.031 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.031 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.031 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.031 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:56.031 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.031 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.031 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.031 "name": "raid_bdev1", 00:14:56.031 "uuid": "d12895e7-1cb5-4e8e-995b-79312e6a1187", 00:14:56.031 "strip_size_kb": 64, 00:14:56.031 "state": "configuring", 00:14:56.031 "raid_level": "raid5f", 00:14:56.031 "superblock": true, 00:14:56.031 "num_base_bdevs": 4, 00:14:56.031 "num_base_bdevs_discovered": 1, 00:14:56.031 "num_base_bdevs_operational": 4, 00:14:56.031 "base_bdevs_list": [ 00:14:56.031 { 00:14:56.031 "name": "pt1", 00:14:56.031 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:56.031 "is_configured": true, 00:14:56.031 "data_offset": 2048, 00:14:56.031 "data_size": 63488 00:14:56.031 }, 00:14:56.031 { 00:14:56.031 "name": null, 00:14:56.031 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:56.031 "is_configured": false, 00:14:56.031 "data_offset": 2048, 00:14:56.031 "data_size": 63488 00:14:56.031 }, 00:14:56.031 { 00:14:56.031 "name": null, 00:14:56.031 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:56.031 "is_configured": false, 00:14:56.031 "data_offset": 2048, 00:14:56.031 "data_size": 63488 00:14:56.031 }, 00:14:56.031 { 00:14:56.031 "name": null, 00:14:56.031 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:56.031 "is_configured": false, 00:14:56.031 "data_offset": 2048, 00:14:56.031 "data_size": 63488 00:14:56.031 } 00:14:56.031 ] 00:14:56.031 }' 00:14:56.031 01:34:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.031 01:34:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.290 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:56.290 01:34:55 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:56.290 01:34:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.290 01:34:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.290 [2024-10-09 01:34:55.094286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:56.290 [2024-10-09 01:34:55.094331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.290 [2024-10-09 01:34:55.094343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:56.290 [2024-10-09 01:34:55.094352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.290 [2024-10-09 01:34:55.094668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.290 [2024-10-09 01:34:55.094687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:56.290 [2024-10-09 01:34:55.094733] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:56.290 [2024-10-09 01:34:55.094754] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:56.290 pt2 00:14:56.291 01:34:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.291 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:56.291 01:34:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.291 01:34:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.291 [2024-10-09 01:34:55.106322] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:56.291 01:34:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.291 01:34:55 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:56.291 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.291 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.291 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.291 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.291 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.291 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.291 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.291 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.291 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.291 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.291 01:34:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.291 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.291 01:34:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.291 01:34:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.291 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.291 "name": "raid_bdev1", 00:14:56.291 "uuid": "d12895e7-1cb5-4e8e-995b-79312e6a1187", 00:14:56.291 "strip_size_kb": 64, 00:14:56.291 "state": "configuring", 00:14:56.291 "raid_level": "raid5f", 00:14:56.291 "superblock": true, 00:14:56.291 
"num_base_bdevs": 4, 00:14:56.291 "num_base_bdevs_discovered": 1, 00:14:56.291 "num_base_bdevs_operational": 4, 00:14:56.291 "base_bdevs_list": [ 00:14:56.291 { 00:14:56.291 "name": "pt1", 00:14:56.291 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:56.291 "is_configured": true, 00:14:56.291 "data_offset": 2048, 00:14:56.291 "data_size": 63488 00:14:56.291 }, 00:14:56.291 { 00:14:56.291 "name": null, 00:14:56.291 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:56.291 "is_configured": false, 00:14:56.291 "data_offset": 0, 00:14:56.291 "data_size": 63488 00:14:56.291 }, 00:14:56.291 { 00:14:56.291 "name": null, 00:14:56.291 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:56.291 "is_configured": false, 00:14:56.291 "data_offset": 2048, 00:14:56.291 "data_size": 63488 00:14:56.291 }, 00:14:56.291 { 00:14:56.291 "name": null, 00:14:56.291 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:56.291 "is_configured": false, 00:14:56.291 "data_offset": 2048, 00:14:56.291 "data_size": 63488 00:14:56.291 } 00:14:56.291 ] 00:14:56.291 }' 00:14:56.291 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.291 01:34:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.860 [2024-10-09 01:34:55.550437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:56.860 [2024-10-09 
01:34:55.550538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.860 [2024-10-09 01:34:55.550570] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:56.860 [2024-10-09 01:34:55.550597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.860 [2024-10-09 01:34:55.550919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.860 [2024-10-09 01:34:55.550973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:56.860 [2024-10-09 01:34:55.551047] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:56.860 [2024-10-09 01:34:55.551089] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:56.860 pt2 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.860 [2024-10-09 01:34:55.562425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:56.860 [2024-10-09 01:34:55.562509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.860 [2024-10-09 01:34:55.562554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:56.860 [2024-10-09 01:34:55.562580] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:14:56.860 [2024-10-09 01:34:55.562903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.860 [2024-10-09 01:34:55.562966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:56.860 [2024-10-09 01:34:55.563039] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:56.860 [2024-10-09 01:34:55.563082] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:56.860 pt3 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.860 [2024-10-09 01:34:55.574424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:56.860 [2024-10-09 01:34:55.574506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.860 [2024-10-09 01:34:55.574551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:56.860 [2024-10-09 01:34:55.574577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.860 [2024-10-09 01:34:55.574884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.860 [2024-10-09 01:34:55.574945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:56.860 [2024-10-09 01:34:55.575022] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:56.860 [2024-10-09 01:34:55.575042] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:56.860 [2024-10-09 01:34:55.575144] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:56.860 [2024-10-09 01:34:55.575152] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:56.860 [2024-10-09 01:34:55.575403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:56.860 [2024-10-09 01:34:55.575882] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:56.860 [2024-10-09 01:34:55.575897] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:56.860 [2024-10-09 01:34:55.575980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.860 pt4 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.860 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.860 "name": "raid_bdev1", 00:14:56.861 "uuid": "d12895e7-1cb5-4e8e-995b-79312e6a1187", 00:14:56.861 "strip_size_kb": 64, 00:14:56.861 "state": "online", 00:14:56.861 "raid_level": "raid5f", 00:14:56.861 "superblock": true, 00:14:56.861 "num_base_bdevs": 4, 00:14:56.861 "num_base_bdevs_discovered": 4, 00:14:56.861 "num_base_bdevs_operational": 4, 00:14:56.861 "base_bdevs_list": [ 00:14:56.861 { 00:14:56.861 "name": "pt1", 00:14:56.861 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:56.861 "is_configured": true, 00:14:56.861 "data_offset": 2048, 00:14:56.861 "data_size": 63488 00:14:56.861 }, 00:14:56.861 { 00:14:56.861 "name": "pt2", 00:14:56.861 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:56.861 "is_configured": true, 00:14:56.861 "data_offset": 2048, 00:14:56.861 "data_size": 63488 00:14:56.861 }, 00:14:56.861 { 00:14:56.861 "name": "pt3", 
00:14:56.861 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:56.861 "is_configured": true, 00:14:56.861 "data_offset": 2048, 00:14:56.861 "data_size": 63488 00:14:56.861 }, 00:14:56.861 { 00:14:56.861 "name": "pt4", 00:14:56.861 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:56.861 "is_configured": true, 00:14:56.861 "data_offset": 2048, 00:14:56.861 "data_size": 63488 00:14:56.861 } 00:14:56.861 ] 00:14:56.861 }' 00:14:56.861 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.861 01:34:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.120 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:57.120 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:57.120 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:57.120 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:57.120 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:57.120 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:57.120 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:57.120 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:57.120 01:34:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.120 01:34:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.120 [2024-10-09 01:34:55.966703] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.120 01:34:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.120 01:34:55 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:57.120 "name": "raid_bdev1", 00:14:57.120 "aliases": [ 00:14:57.120 "d12895e7-1cb5-4e8e-995b-79312e6a1187" 00:14:57.120 ], 00:14:57.120 "product_name": "Raid Volume", 00:14:57.120 "block_size": 512, 00:14:57.120 "num_blocks": 190464, 00:14:57.120 "uuid": "d12895e7-1cb5-4e8e-995b-79312e6a1187", 00:14:57.120 "assigned_rate_limits": { 00:14:57.120 "rw_ios_per_sec": 0, 00:14:57.120 "rw_mbytes_per_sec": 0, 00:14:57.120 "r_mbytes_per_sec": 0, 00:14:57.120 "w_mbytes_per_sec": 0 00:14:57.120 }, 00:14:57.120 "claimed": false, 00:14:57.120 "zoned": false, 00:14:57.120 "supported_io_types": { 00:14:57.120 "read": true, 00:14:57.121 "write": true, 00:14:57.121 "unmap": false, 00:14:57.121 "flush": false, 00:14:57.121 "reset": true, 00:14:57.121 "nvme_admin": false, 00:14:57.121 "nvme_io": false, 00:14:57.121 "nvme_io_md": false, 00:14:57.121 "write_zeroes": true, 00:14:57.121 "zcopy": false, 00:14:57.121 "get_zone_info": false, 00:14:57.121 "zone_management": false, 00:14:57.121 "zone_append": false, 00:14:57.121 "compare": false, 00:14:57.121 "compare_and_write": false, 00:14:57.121 "abort": false, 00:14:57.121 "seek_hole": false, 00:14:57.121 "seek_data": false, 00:14:57.121 "copy": false, 00:14:57.121 "nvme_iov_md": false 00:14:57.121 }, 00:14:57.121 "driver_specific": { 00:14:57.121 "raid": { 00:14:57.121 "uuid": "d12895e7-1cb5-4e8e-995b-79312e6a1187", 00:14:57.121 "strip_size_kb": 64, 00:14:57.121 "state": "online", 00:14:57.121 "raid_level": "raid5f", 00:14:57.121 "superblock": true, 00:14:57.121 "num_base_bdevs": 4, 00:14:57.121 "num_base_bdevs_discovered": 4, 00:14:57.121 "num_base_bdevs_operational": 4, 00:14:57.121 "base_bdevs_list": [ 00:14:57.121 { 00:14:57.121 "name": "pt1", 00:14:57.121 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:57.121 "is_configured": true, 00:14:57.121 "data_offset": 2048, 00:14:57.121 "data_size": 63488 00:14:57.121 }, 00:14:57.121 { 00:14:57.121 
"name": "pt2", 00:14:57.121 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:57.121 "is_configured": true, 00:14:57.121 "data_offset": 2048, 00:14:57.121 "data_size": 63488 00:14:57.121 }, 00:14:57.121 { 00:14:57.121 "name": "pt3", 00:14:57.121 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:57.121 "is_configured": true, 00:14:57.121 "data_offset": 2048, 00:14:57.121 "data_size": 63488 00:14:57.121 }, 00:14:57.121 { 00:14:57.121 "name": "pt4", 00:14:57.121 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:57.121 "is_configured": true, 00:14:57.121 "data_offset": 2048, 00:14:57.121 "data_size": 63488 00:14:57.121 } 00:14:57.121 ] 00:14:57.121 } 00:14:57.121 } 00:14:57.121 }' 00:14:57.121 01:34:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:57.380 pt2 00:14:57.380 pt3 00:14:57.380 pt4' 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.380 01:34:56 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.380 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.380 [2024-10-09 01:34:56.258730] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.639 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.639 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d12895e7-1cb5-4e8e-995b-79312e6a1187 '!=' d12895e7-1cb5-4e8e-995b-79312e6a1187 ']' 00:14:57.639 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:57.639 01:34:56 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:57.639 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:57.639 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:57.639 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.639 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.640 [2024-10-09 01:34:56.306657] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:57.640 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.640 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:57.640 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.640 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.640 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.640 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.640 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.640 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.640 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.640 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.640 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.640 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.640 01:34:56 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.640 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.640 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.640 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.640 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.640 "name": "raid_bdev1", 00:14:57.640 "uuid": "d12895e7-1cb5-4e8e-995b-79312e6a1187", 00:14:57.640 "strip_size_kb": 64, 00:14:57.640 "state": "online", 00:14:57.640 "raid_level": "raid5f", 00:14:57.640 "superblock": true, 00:14:57.640 "num_base_bdevs": 4, 00:14:57.640 "num_base_bdevs_discovered": 3, 00:14:57.640 "num_base_bdevs_operational": 3, 00:14:57.640 "base_bdevs_list": [ 00:14:57.640 { 00:14:57.640 "name": null, 00:14:57.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.640 "is_configured": false, 00:14:57.640 "data_offset": 0, 00:14:57.640 "data_size": 63488 00:14:57.640 }, 00:14:57.640 { 00:14:57.640 "name": "pt2", 00:14:57.640 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:57.640 "is_configured": true, 00:14:57.640 "data_offset": 2048, 00:14:57.640 "data_size": 63488 00:14:57.640 }, 00:14:57.640 { 00:14:57.640 "name": "pt3", 00:14:57.640 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:57.640 "is_configured": true, 00:14:57.640 "data_offset": 2048, 00:14:57.640 "data_size": 63488 00:14:57.640 }, 00:14:57.640 { 00:14:57.640 "name": "pt4", 00:14:57.640 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:57.640 "is_configured": true, 00:14:57.640 "data_offset": 2048, 00:14:57.640 "data_size": 63488 00:14:57.640 } 00:14:57.640 ] 00:14:57.640 }' 00:14:57.640 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.640 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:57.899 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:57.900 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.900 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.900 [2024-10-09 01:34:56.778722] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:57.900 [2024-10-09 01:34:56.778788] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:57.900 [2024-10-09 01:34:56.778854] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.900 [2024-10-09 01:34:56.778925] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.900 [2024-10-09 01:34:56.778955] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:57.900 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.900 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.900 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.900 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:57.900 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:58.160 01:34:56 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.160 [2024-10-09 01:34:56.878760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:58.160 [2024-10-09 01:34:56.878807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.160 [2024-10-09 01:34:56.878824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:58.160 [2024-10-09 01:34:56.878832] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.160 [2024-10-09 01:34:56.881274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.160 [2024-10-09 01:34:56.881345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:58.160 [2024-10-09 01:34:56.881421] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:58.160 [2024-10-09 01:34:56.881466] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:58.160 pt2 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 
-- # local expected_state=configuring 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.160 "name": "raid_bdev1", 00:14:58.160 "uuid": "d12895e7-1cb5-4e8e-995b-79312e6a1187", 00:14:58.160 "strip_size_kb": 64, 00:14:58.160 "state": "configuring", 00:14:58.160 "raid_level": "raid5f", 00:14:58.160 "superblock": true, 00:14:58.160 "num_base_bdevs": 4, 00:14:58.160 "num_base_bdevs_discovered": 1, 00:14:58.160 "num_base_bdevs_operational": 3, 00:14:58.160 "base_bdevs_list": [ 00:14:58.160 { 00:14:58.160 "name": null, 00:14:58.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.160 "is_configured": false, 
00:14:58.160 "data_offset": 2048, 00:14:58.160 "data_size": 63488 00:14:58.160 }, 00:14:58.160 { 00:14:58.160 "name": "pt2", 00:14:58.160 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:58.160 "is_configured": true, 00:14:58.160 "data_offset": 2048, 00:14:58.160 "data_size": 63488 00:14:58.160 }, 00:14:58.160 { 00:14:58.160 "name": null, 00:14:58.160 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:58.160 "is_configured": false, 00:14:58.160 "data_offset": 2048, 00:14:58.160 "data_size": 63488 00:14:58.160 }, 00:14:58.160 { 00:14:58.160 "name": null, 00:14:58.160 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:58.160 "is_configured": false, 00:14:58.160 "data_offset": 2048, 00:14:58.160 "data_size": 63488 00:14:58.160 } 00:14:58.160 ] 00:14:58.160 }' 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.160 01:34:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.420 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:58.420 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:58.420 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:58.420 01:34:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.420 01:34:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.420 [2024-10-09 01:34:57.294864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:58.420 [2024-10-09 01:34:57.294906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.420 [2024-10-09 01:34:57.294921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:58.420 [2024-10-09 01:34:57.294928] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.420 [2024-10-09 01:34:57.295220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.420 [2024-10-09 01:34:57.295234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:58.420 [2024-10-09 01:34:57.295284] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:58.420 [2024-10-09 01:34:57.295307] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:58.420 pt3 00:14:58.420 01:34:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.420 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:58.420 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.420 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.420 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.420 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.420 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.420 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.420 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.420 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.420 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.420 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.420 01:34:57 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.420 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.420 01:34:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.680 01:34:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.680 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.680 "name": "raid_bdev1", 00:14:58.680 "uuid": "d12895e7-1cb5-4e8e-995b-79312e6a1187", 00:14:58.680 "strip_size_kb": 64, 00:14:58.680 "state": "configuring", 00:14:58.680 "raid_level": "raid5f", 00:14:58.680 "superblock": true, 00:14:58.680 "num_base_bdevs": 4, 00:14:58.680 "num_base_bdevs_discovered": 2, 00:14:58.680 "num_base_bdevs_operational": 3, 00:14:58.680 "base_bdevs_list": [ 00:14:58.680 { 00:14:58.680 "name": null, 00:14:58.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.680 "is_configured": false, 00:14:58.680 "data_offset": 2048, 00:14:58.680 "data_size": 63488 00:14:58.680 }, 00:14:58.680 { 00:14:58.680 "name": "pt2", 00:14:58.680 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:58.680 "is_configured": true, 00:14:58.680 "data_offset": 2048, 00:14:58.680 "data_size": 63488 00:14:58.680 }, 00:14:58.680 { 00:14:58.680 "name": "pt3", 00:14:58.680 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:58.680 "is_configured": true, 00:14:58.680 "data_offset": 2048, 00:14:58.680 "data_size": 63488 00:14:58.680 }, 00:14:58.680 { 00:14:58.680 "name": null, 00:14:58.680 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:58.680 "is_configured": false, 00:14:58.680 "data_offset": 2048, 00:14:58.680 "data_size": 63488 00:14:58.680 } 00:14:58.680 ] 00:14:58.680 }' 00:14:58.680 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.680 01:34:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:58.940 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:58.940 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:58.940 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:14:58.940 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:58.940 01:34:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.940 01:34:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.940 [2024-10-09 01:34:57.706966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:58.940 [2024-10-09 01:34:57.707064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.940 [2024-10-09 01:34:57.707103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:58.940 [2024-10-09 01:34:57.707130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.940 [2024-10-09 01:34:57.707439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.940 [2024-10-09 01:34:57.707494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:58.940 [2024-10-09 01:34:57.707582] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:58.940 [2024-10-09 01:34:57.707627] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:58.940 [2024-10-09 01:34:57.707735] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:58.940 [2024-10-09 01:34:57.707767] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:58.940 [2024-10-09 01:34:57.708014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006150 00:14:58.940 [2024-10-09 01:34:57.708597] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:58.940 [2024-10-09 01:34:57.708649] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:58.940 [2024-10-09 01:34:57.708901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.940 pt4 00:14:58.940 01:34:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.940 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:58.940 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.940 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.940 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.940 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.940 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.940 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.940 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.940 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.940 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.940 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.940 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.940 01:34:57 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.940 01:34:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.940 01:34:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.940 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.940 "name": "raid_bdev1", 00:14:58.940 "uuid": "d12895e7-1cb5-4e8e-995b-79312e6a1187", 00:14:58.940 "strip_size_kb": 64, 00:14:58.940 "state": "online", 00:14:58.940 "raid_level": "raid5f", 00:14:58.940 "superblock": true, 00:14:58.940 "num_base_bdevs": 4, 00:14:58.940 "num_base_bdevs_discovered": 3, 00:14:58.940 "num_base_bdevs_operational": 3, 00:14:58.940 "base_bdevs_list": [ 00:14:58.940 { 00:14:58.940 "name": null, 00:14:58.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.940 "is_configured": false, 00:14:58.940 "data_offset": 2048, 00:14:58.940 "data_size": 63488 00:14:58.940 }, 00:14:58.940 { 00:14:58.940 "name": "pt2", 00:14:58.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:58.940 "is_configured": true, 00:14:58.940 "data_offset": 2048, 00:14:58.940 "data_size": 63488 00:14:58.940 }, 00:14:58.940 { 00:14:58.940 "name": "pt3", 00:14:58.940 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:58.940 "is_configured": true, 00:14:58.940 "data_offset": 2048, 00:14:58.940 "data_size": 63488 00:14:58.940 }, 00:14:58.940 { 00:14:58.940 "name": "pt4", 00:14:58.940 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:58.940 "is_configured": true, 00:14:58.940 "data_offset": 2048, 00:14:58.940 "data_size": 63488 00:14:58.940 } 00:14:58.940 ] 00:14:58.940 }' 00:14:58.941 01:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.941 01:34:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.510 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:59.510 
01:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.510 01:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.510 [2024-10-09 01:34:58.155270] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:59.510 [2024-10-09 01:34:58.155293] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:59.510 [2024-10-09 01:34:58.155347] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.510 [2024-10-09 01:34:58.155405] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:59.511 [2024-10-09 01:34:58.155416] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete 
pt4 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.511 [2024-10-09 01:34:58.231315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:59.511 [2024-10-09 01:34:58.231367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.511 [2024-10-09 01:34:58.231381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:14:59.511 [2024-10-09 01:34:58.231391] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.511 [2024-10-09 01:34:58.233788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.511 [2024-10-09 01:34:58.233826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:59.511 [2024-10-09 01:34:58.233876] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:59.511 [2024-10-09 01:34:58.233916] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:59.511 [2024-10-09 01:34:58.233999] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:59.511 [2024-10-09 01:34:58.234013] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:59.511 [2024-10-09 
01:34:58.234028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:59.511 [2024-10-09 01:34:58.234073] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:59.511 [2024-10-09 01:34:58.234151] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:59.511 pt1 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.511 01:34:58 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.511 "name": "raid_bdev1", 00:14:59.511 "uuid": "d12895e7-1cb5-4e8e-995b-79312e6a1187", 00:14:59.511 "strip_size_kb": 64, 00:14:59.511 "state": "configuring", 00:14:59.511 "raid_level": "raid5f", 00:14:59.511 "superblock": true, 00:14:59.511 "num_base_bdevs": 4, 00:14:59.511 "num_base_bdevs_discovered": 2, 00:14:59.511 "num_base_bdevs_operational": 3, 00:14:59.511 "base_bdevs_list": [ 00:14:59.511 { 00:14:59.511 "name": null, 00:14:59.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.511 "is_configured": false, 00:14:59.511 "data_offset": 2048, 00:14:59.511 "data_size": 63488 00:14:59.511 }, 00:14:59.511 { 00:14:59.511 "name": "pt2", 00:14:59.511 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:59.511 "is_configured": true, 00:14:59.511 "data_offset": 2048, 00:14:59.511 "data_size": 63488 00:14:59.511 }, 00:14:59.511 { 00:14:59.511 "name": "pt3", 00:14:59.511 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:59.511 "is_configured": true, 00:14:59.511 "data_offset": 2048, 00:14:59.511 "data_size": 63488 00:14:59.511 }, 00:14:59.511 { 00:14:59.511 "name": null, 00:14:59.511 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:59.511 "is_configured": false, 00:14:59.511 "data_offset": 2048, 00:14:59.511 "data_size": 63488 00:14:59.511 } 00:14:59.511 ] 00:14:59.511 }' 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.511 01:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.081 01:34:58 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:00.081 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:00.081 01:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.081 01:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.081 01:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.081 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:00.081 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:00.081 01:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.081 01:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.081 [2024-10-09 01:34:58.707426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:00.081 [2024-10-09 01:34:58.707511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.081 [2024-10-09 01:34:58.707556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:00.081 [2024-10-09 01:34:58.707584] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.081 [2024-10-09 01:34:58.707914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.081 [2024-10-09 01:34:58.707969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:00.081 [2024-10-09 01:34:58.708041] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:00.081 [2024-10-09 01:34:58.708082] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:00.081 [2024-10-09 
01:34:58.708180] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:00.081 [2024-10-09 01:34:58.708214] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:00.081 [2024-10-09 01:34:58.708456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:00.081 [2024-10-09 01:34:58.709027] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:00.081 [2024-10-09 01:34:58.709083] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:00.081 [2024-10-09 01:34:58.709294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.081 pt4 00:15:00.081 01:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.081 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:00.081 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.081 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.081 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.081 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.081 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.081 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.081 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.081 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.081 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:15:00.081 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.081 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.081 01:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.081 01:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.081 01:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.081 01:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.081 "name": "raid_bdev1", 00:15:00.081 "uuid": "d12895e7-1cb5-4e8e-995b-79312e6a1187", 00:15:00.081 "strip_size_kb": 64, 00:15:00.081 "state": "online", 00:15:00.081 "raid_level": "raid5f", 00:15:00.081 "superblock": true, 00:15:00.081 "num_base_bdevs": 4, 00:15:00.081 "num_base_bdevs_discovered": 3, 00:15:00.081 "num_base_bdevs_operational": 3, 00:15:00.081 "base_bdevs_list": [ 00:15:00.081 { 00:15:00.081 "name": null, 00:15:00.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.081 "is_configured": false, 00:15:00.081 "data_offset": 2048, 00:15:00.081 "data_size": 63488 00:15:00.081 }, 00:15:00.081 { 00:15:00.081 "name": "pt2", 00:15:00.081 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:00.081 "is_configured": true, 00:15:00.081 "data_offset": 2048, 00:15:00.081 "data_size": 63488 00:15:00.081 }, 00:15:00.081 { 00:15:00.081 "name": "pt3", 00:15:00.081 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:00.081 "is_configured": true, 00:15:00.081 "data_offset": 2048, 00:15:00.081 "data_size": 63488 00:15:00.081 }, 00:15:00.081 { 00:15:00.081 "name": "pt4", 00:15:00.081 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:00.081 "is_configured": true, 00:15:00.081 "data_offset": 2048, 00:15:00.081 "data_size": 63488 00:15:00.081 } 00:15:00.081 ] 00:15:00.081 }' 00:15:00.081 01:34:58 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.081 01:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.341 01:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:00.341 01:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:00.341 01:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.341 01:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.341 01:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.341 01:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:00.341 01:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:00.341 01:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:00.341 01:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.341 01:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.341 [2024-10-09 01:34:59.191713] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.341 01:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.341 01:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d12895e7-1cb5-4e8e-995b-79312e6a1187 '!=' d12895e7-1cb5-4e8e-995b-79312e6a1187 ']' 00:15:00.341 01:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 95708 00:15:00.341 01:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 95708 ']' 00:15:00.341 01:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 
95708 00:15:00.341 01:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:15:00.341 01:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:00.341 01:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95708 00:15:00.601 01:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:00.601 01:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:00.601 killing process with pid 95708 00:15:00.601 01:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95708' 00:15:00.601 01:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 95708 00:15:00.601 [2024-10-09 01:34:59.256228] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:00.601 [2024-10-09 01:34:59.256289] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:00.601 [2024-10-09 01:34:59.256348] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:00.601 [2024-10-09 01:34:59.256358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:00.601 01:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 95708 00:15:00.601 [2024-10-09 01:34:59.335563] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:00.861 01:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:00.861 00:15:00.861 real 0m7.149s 00:15:00.861 user 0m11.756s 00:15:00.861 sys 0m1.563s 00:15:00.861 01:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:00.861 01:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.861 
************************************ 00:15:00.861 END TEST raid5f_superblock_test 00:15:00.861 ************************************ 00:15:01.120 01:34:59 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:01.120 01:34:59 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:15:01.120 01:34:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:01.120 01:34:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:01.120 01:34:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:01.120 ************************************ 00:15:01.120 START TEST raid5f_rebuild_test 00:15:01.120 ************************************ 00:15:01.120 01:34:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:15:01.120 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:01.120 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:01.120 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:01.120 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:01.120 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:01.120 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:01.120 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:01.120 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:01.120 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:01.120 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:01.120 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 
00:15:01.120 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:01.120 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:01.120 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:01.120 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:01.121 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:01.121 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:01.121 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:01.121 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:01.121 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:01.121 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:01.121 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:01.121 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:01.121 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:01.121 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:01.121 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:01.121 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:01.121 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:01.121 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:01.121 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:01.121 01:34:59 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:01.121 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=96182 00:15:01.121 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:01.121 01:34:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 96182 00:15:01.121 01:34:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 96182 ']' 00:15:01.121 01:34:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.121 01:34:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:01.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.121 01:34:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.121 01:34:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:01.121 01:34:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.121 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:01.121 Zero copy mechanism will not be used. 00:15:01.121 [2024-10-09 01:34:59.895162] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:15:01.121 [2024-10-09 01:34:59.895274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96182 ] 00:15:01.380 [2024-10-09 01:35:00.026995] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:15:01.380 [2024-10-09 01:35:00.056746] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.380 [2024-10-09 01:35:00.128365] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.380 [2024-10-09 01:35:00.204323] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.380 [2024-10-09 01:35:00.204370] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.950 BaseBdev1_malloc 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.950 [2024-10-09 01:35:00.735832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:01.950 [2024-10-09 01:35:00.735906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.950 [2024-10-09 01:35:00.735935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:15:01.950 [2024-10-09 01:35:00.735953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.950 [2024-10-09 01:35:00.738363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.950 [2024-10-09 01:35:00.738403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:01.950 BaseBdev1 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.950 BaseBdev2_malloc 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.950 [2024-10-09 01:35:00.789540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:01.950 [2024-10-09 01:35:00.789759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.950 [2024-10-09 01:35:00.789808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:01.950 [2024-10-09 01:35:00.789833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.950 [2024-10-09 01:35:00.794629] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.950 [2024-10-09 01:35:00.794732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:01.950 BaseBdev2 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.950 BaseBdev3_malloc 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.950 [2024-10-09 01:35:00.826808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:01.950 [2024-10-09 01:35:00.826862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.950 [2024-10-09 01:35:00.826883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:01.950 [2024-10-09 01:35:00.826894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.950 [2024-10-09 01:35:00.829177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.950 [2024-10-09 01:35:00.829270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:01.950 
BaseBdev3 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.950 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.210 BaseBdev4_malloc 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.210 [2024-10-09 01:35:00.861335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:02.210 [2024-10-09 01:35:00.861399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.210 [2024-10-09 01:35:00.861418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:02.210 [2024-10-09 01:35:00.861429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.210 [2024-10-09 01:35:00.863791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.210 [2024-10-09 01:35:00.863877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:02.210 BaseBdev4 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd 
bdev_malloc_create 32 512 -b spare_malloc 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.210 spare_malloc 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.210 spare_delay 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.210 [2024-10-09 01:35:00.907937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:02.210 [2024-10-09 01:35:00.907992] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.210 [2024-10-09 01:35:00.908009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:02.210 [2024-10-09 01:35:00.908020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.210 [2024-10-09 01:35:00.910415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.210 [2024-10-09 01:35:00.910451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:02.210 spare 00:15:02.210 01:35:00 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.210 [2024-10-09 01:35:00.920033] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:02.210 [2024-10-09 01:35:00.922132] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:02.210 [2024-10-09 01:35:00.922191] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:02.210 [2024-10-09 01:35:00.922231] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:02.210 [2024-10-09 01:35:00.922311] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:02.210 [2024-10-09 01:35:00.922329] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:02.210 [2024-10-09 01:35:00.922589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:02.210 [2024-10-09 01:35:00.923053] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:02.210 [2024-10-09 01:35:00.923064] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:02.210 [2024-10-09 01:35:00.923185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:02.210 
01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.210 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.211 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.211 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.211 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.211 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.211 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.211 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.211 "name": "raid_bdev1", 00:15:02.211 "uuid": "05ac5bac-3f90-4209-8b45-459b801b3102", 00:15:02.211 "strip_size_kb": 64, 00:15:02.211 "state": "online", 00:15:02.211 "raid_level": "raid5f", 00:15:02.211 "superblock": false, 00:15:02.211 "num_base_bdevs": 4, 00:15:02.211 "num_base_bdevs_discovered": 4, 00:15:02.211 "num_base_bdevs_operational": 4, 00:15:02.211 "base_bdevs_list": [ 00:15:02.211 { 
00:15:02.211 "name": "BaseBdev1", 00:15:02.211 "uuid": "ddcf4bbd-0073-5cb6-aaac-283a19cf01bd", 00:15:02.211 "is_configured": true, 00:15:02.211 "data_offset": 0, 00:15:02.211 "data_size": 65536 00:15:02.211 }, 00:15:02.211 { 00:15:02.211 "name": "BaseBdev2", 00:15:02.211 "uuid": "b4ae549f-da51-5682-8a86-e9b0cd2fb94b", 00:15:02.211 "is_configured": true, 00:15:02.211 "data_offset": 0, 00:15:02.211 "data_size": 65536 00:15:02.211 }, 00:15:02.211 { 00:15:02.211 "name": "BaseBdev3", 00:15:02.211 "uuid": "fbf0932e-7b16-593a-a5b6-c2d1b16f8836", 00:15:02.211 "is_configured": true, 00:15:02.211 "data_offset": 0, 00:15:02.211 "data_size": 65536 00:15:02.211 }, 00:15:02.211 { 00:15:02.211 "name": "BaseBdev4", 00:15:02.211 "uuid": "4fb031bf-ece0-5a3b-9b37-a686529573cb", 00:15:02.211 "is_configured": true, 00:15:02.211 "data_offset": 0, 00:15:02.211 "data_size": 65536 00:15:02.211 } 00:15:02.211 ] 00:15:02.211 }' 00:15:02.211 01:35:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.211 01:35:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.780 01:35:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:02.780 01:35:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.780 01:35:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.780 01:35:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:02.780 [2024-10-09 01:35:01.390452] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.780 01:35:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.780 01:35:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:15:02.780 01:35:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.780 01:35:01 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.780 01:35:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.780 01:35:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:02.780 01:35:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.780 01:35:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:02.780 01:35:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:02.780 01:35:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:02.780 01:35:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:02.780 01:35:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:02.780 01:35:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:02.780 01:35:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:02.780 01:35:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:02.780 01:35:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:02.780 01:35:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:02.780 01:35:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:02.780 01:35:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:02.780 01:35:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:02.780 01:35:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:02.780 [2024-10-09 01:35:01.662377] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:03.040 /dev/nbd0 00:15:03.040 01:35:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:03.040 01:35:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:03.040 01:35:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:03.040 01:35:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:03.040 01:35:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:03.040 01:35:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:03.040 01:35:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:03.040 01:35:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:03.040 01:35:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:03.040 01:35:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:03.040 01:35:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:03.040 1+0 records in 00:15:03.040 1+0 records out 00:15:03.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314044 s, 13.0 MB/s 00:15:03.040 01:35:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.040 01:35:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:03.040 01:35:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.040 01:35:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:03.040 01:35:01 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:03.040 01:35:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:03.040 01:35:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:03.040 01:35:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:03.040 01:35:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:03.040 01:35:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:03.040 01:35:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:15:03.611 512+0 records in 00:15:03.611 512+0 records out 00:15:03.611 100663296 bytes (101 MB, 96 MiB) copied, 0.672923 s, 150 MB/s 00:15:03.611 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:03.611 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:03.611 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:03.611 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:03.611 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:03.611 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:03.611 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:03.883 01:35:02 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:03.883 [2024-10-09 01:35:02.622850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.883 [2024-10-09 01:35:02.633626] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.883 "name": "raid_bdev1", 00:15:03.883 "uuid": "05ac5bac-3f90-4209-8b45-459b801b3102", 00:15:03.883 "strip_size_kb": 64, 00:15:03.883 "state": "online", 00:15:03.883 "raid_level": "raid5f", 00:15:03.883 "superblock": false, 00:15:03.883 "num_base_bdevs": 4, 00:15:03.883 "num_base_bdevs_discovered": 3, 00:15:03.883 "num_base_bdevs_operational": 3, 00:15:03.883 "base_bdevs_list": [ 00:15:03.883 { 00:15:03.883 "name": null, 00:15:03.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.883 "is_configured": false, 00:15:03.883 "data_offset": 0, 00:15:03.883 "data_size": 65536 00:15:03.883 }, 00:15:03.883 { 00:15:03.883 "name": "BaseBdev2", 00:15:03.883 "uuid": "b4ae549f-da51-5682-8a86-e9b0cd2fb94b", 00:15:03.883 "is_configured": true, 00:15:03.883 "data_offset": 0, 00:15:03.883 "data_size": 65536 00:15:03.883 }, 00:15:03.883 { 00:15:03.883 "name": "BaseBdev3", 00:15:03.883 "uuid": "fbf0932e-7b16-593a-a5b6-c2d1b16f8836", 00:15:03.883 "is_configured": true, 00:15:03.883 "data_offset": 0, 00:15:03.883 "data_size": 65536 00:15:03.883 }, 00:15:03.883 { 00:15:03.883 "name": "BaseBdev4", 00:15:03.883 "uuid": 
"4fb031bf-ece0-5a3b-9b37-a686529573cb", 00:15:03.883 "is_configured": true, 00:15:03.883 "data_offset": 0, 00:15:03.883 "data_size": 65536 00:15:03.883 } 00:15:03.883 ] 00:15:03.883 }' 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.883 01:35:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.467 01:35:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:04.467 01:35:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.467 01:35:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.467 [2024-10-09 01:35:03.069752] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:04.467 [2024-10-09 01:35:03.075600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:15:04.467 01:35:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.467 01:35:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:04.467 [2024-10-09 01:35:03.078157] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:05.407 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.407 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.407 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.407 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.407 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.407 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.407 01:35:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.407 01:35:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.407 01:35:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.407 01:35:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.407 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.407 "name": "raid_bdev1", 00:15:05.407 "uuid": "05ac5bac-3f90-4209-8b45-459b801b3102", 00:15:05.407 "strip_size_kb": 64, 00:15:05.407 "state": "online", 00:15:05.407 "raid_level": "raid5f", 00:15:05.407 "superblock": false, 00:15:05.407 "num_base_bdevs": 4, 00:15:05.407 "num_base_bdevs_discovered": 4, 00:15:05.407 "num_base_bdevs_operational": 4, 00:15:05.407 "process": { 00:15:05.407 "type": "rebuild", 00:15:05.407 "target": "spare", 00:15:05.407 "progress": { 00:15:05.407 "blocks": 19200, 00:15:05.407 "percent": 9 00:15:05.407 } 00:15:05.407 }, 00:15:05.407 "base_bdevs_list": [ 00:15:05.407 { 00:15:05.407 "name": "spare", 00:15:05.407 "uuid": "ffe68fd8-37d1-53b9-8219-e2f09eb5c5f0", 00:15:05.407 "is_configured": true, 00:15:05.407 "data_offset": 0, 00:15:05.407 "data_size": 65536 00:15:05.407 }, 00:15:05.407 { 00:15:05.407 "name": "BaseBdev2", 00:15:05.407 "uuid": "b4ae549f-da51-5682-8a86-e9b0cd2fb94b", 00:15:05.407 "is_configured": true, 00:15:05.407 "data_offset": 0, 00:15:05.407 "data_size": 65536 00:15:05.407 }, 00:15:05.407 { 00:15:05.407 "name": "BaseBdev3", 00:15:05.407 "uuid": "fbf0932e-7b16-593a-a5b6-c2d1b16f8836", 00:15:05.407 "is_configured": true, 00:15:05.407 "data_offset": 0, 00:15:05.407 "data_size": 65536 00:15:05.407 }, 00:15:05.407 { 00:15:05.407 "name": "BaseBdev4", 00:15:05.407 "uuid": "4fb031bf-ece0-5a3b-9b37-a686529573cb", 00:15:05.407 "is_configured": true, 00:15:05.407 "data_offset": 0, 00:15:05.407 "data_size": 65536 00:15:05.407 } 
00:15:05.407 ] 00:15:05.407 }' 00:15:05.407 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.407 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.407 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.407 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.407 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:05.407 01:35:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.407 01:35:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.407 [2024-10-09 01:35:04.232212] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:05.407 [2024-10-09 01:35:04.286796] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:05.407 [2024-10-09 01:35:04.286853] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.407 [2024-10-09 01:35:04.286869] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:05.407 [2024-10-09 01:35:04.286883] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:05.667 01:35:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.667 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:05.667 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.667 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.667 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid5f 00:15:05.667 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.667 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.667 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.667 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.667 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.667 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.667 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.667 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.667 01:35:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.667 01:35:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.667 01:35:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.667 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.667 "name": "raid_bdev1", 00:15:05.667 "uuid": "05ac5bac-3f90-4209-8b45-459b801b3102", 00:15:05.667 "strip_size_kb": 64, 00:15:05.667 "state": "online", 00:15:05.667 "raid_level": "raid5f", 00:15:05.667 "superblock": false, 00:15:05.667 "num_base_bdevs": 4, 00:15:05.667 "num_base_bdevs_discovered": 3, 00:15:05.667 "num_base_bdevs_operational": 3, 00:15:05.667 "base_bdevs_list": [ 00:15:05.667 { 00:15:05.667 "name": null, 00:15:05.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.667 "is_configured": false, 00:15:05.667 "data_offset": 0, 00:15:05.667 "data_size": 65536 00:15:05.667 }, 00:15:05.667 { 00:15:05.667 "name": "BaseBdev2", 00:15:05.667 "uuid": 
"b4ae549f-da51-5682-8a86-e9b0cd2fb94b", 00:15:05.667 "is_configured": true, 00:15:05.667 "data_offset": 0, 00:15:05.667 "data_size": 65536 00:15:05.668 }, 00:15:05.668 { 00:15:05.668 "name": "BaseBdev3", 00:15:05.668 "uuid": "fbf0932e-7b16-593a-a5b6-c2d1b16f8836", 00:15:05.668 "is_configured": true, 00:15:05.668 "data_offset": 0, 00:15:05.668 "data_size": 65536 00:15:05.668 }, 00:15:05.668 { 00:15:05.668 "name": "BaseBdev4", 00:15:05.668 "uuid": "4fb031bf-ece0-5a3b-9b37-a686529573cb", 00:15:05.668 "is_configured": true, 00:15:05.668 "data_offset": 0, 00:15:05.668 "data_size": 65536 00:15:05.668 } 00:15:05.668 ] 00:15:05.668 }' 00:15:05.668 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.668 01:35:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.927 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:05.927 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.927 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:05.927 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:05.927 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.927 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.927 01:35:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.927 01:35:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.927 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.927 01:35:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.927 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 
-- # raid_bdev_info='{ 00:15:05.927 "name": "raid_bdev1", 00:15:05.927 "uuid": "05ac5bac-3f90-4209-8b45-459b801b3102", 00:15:05.927 "strip_size_kb": 64, 00:15:05.927 "state": "online", 00:15:05.927 "raid_level": "raid5f", 00:15:05.927 "superblock": false, 00:15:05.927 "num_base_bdevs": 4, 00:15:05.927 "num_base_bdevs_discovered": 3, 00:15:05.927 "num_base_bdevs_operational": 3, 00:15:05.927 "base_bdevs_list": [ 00:15:05.927 { 00:15:05.927 "name": null, 00:15:05.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.927 "is_configured": false, 00:15:05.927 "data_offset": 0, 00:15:05.927 "data_size": 65536 00:15:05.927 }, 00:15:05.927 { 00:15:05.927 "name": "BaseBdev2", 00:15:05.927 "uuid": "b4ae549f-da51-5682-8a86-e9b0cd2fb94b", 00:15:05.927 "is_configured": true, 00:15:05.927 "data_offset": 0, 00:15:05.927 "data_size": 65536 00:15:05.927 }, 00:15:05.927 { 00:15:05.927 "name": "BaseBdev3", 00:15:05.927 "uuid": "fbf0932e-7b16-593a-a5b6-c2d1b16f8836", 00:15:05.927 "is_configured": true, 00:15:05.928 "data_offset": 0, 00:15:05.928 "data_size": 65536 00:15:05.928 }, 00:15:05.928 { 00:15:05.928 "name": "BaseBdev4", 00:15:05.928 "uuid": "4fb031bf-ece0-5a3b-9b37-a686529573cb", 00:15:05.928 "is_configured": true, 00:15:05.928 "data_offset": 0, 00:15:05.928 "data_size": 65536 00:15:05.928 } 00:15:05.928 ] 00:15:05.928 }' 00:15:05.928 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.928 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:05.928 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.187 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:06.187 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:06.187 01:35:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:06.187 01:35:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.187 [2024-10-09 01:35:04.871189] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:06.187 [2024-10-09 01:35:04.875253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b8f0 00:15:06.187 01:35:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.187 01:35:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:06.187 [2024-10-09 01:35:04.877619] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:07.126 01:35:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.126 01:35:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.126 01:35:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.126 01:35:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.127 01:35:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.127 01:35:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.127 01:35:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.127 01:35:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.127 01:35:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.127 01:35:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.127 01:35:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.127 "name": "raid_bdev1", 00:15:07.127 "uuid": "05ac5bac-3f90-4209-8b45-459b801b3102", 
00:15:07.127 "strip_size_kb": 64, 00:15:07.127 "state": "online", 00:15:07.127 "raid_level": "raid5f", 00:15:07.127 "superblock": false, 00:15:07.127 "num_base_bdevs": 4, 00:15:07.127 "num_base_bdevs_discovered": 4, 00:15:07.127 "num_base_bdevs_operational": 4, 00:15:07.127 "process": { 00:15:07.127 "type": "rebuild", 00:15:07.127 "target": "spare", 00:15:07.127 "progress": { 00:15:07.127 "blocks": 19200, 00:15:07.127 "percent": 9 00:15:07.127 } 00:15:07.127 }, 00:15:07.127 "base_bdevs_list": [ 00:15:07.127 { 00:15:07.127 "name": "spare", 00:15:07.127 "uuid": "ffe68fd8-37d1-53b9-8219-e2f09eb5c5f0", 00:15:07.127 "is_configured": true, 00:15:07.127 "data_offset": 0, 00:15:07.127 "data_size": 65536 00:15:07.127 }, 00:15:07.127 { 00:15:07.127 "name": "BaseBdev2", 00:15:07.127 "uuid": "b4ae549f-da51-5682-8a86-e9b0cd2fb94b", 00:15:07.127 "is_configured": true, 00:15:07.127 "data_offset": 0, 00:15:07.127 "data_size": 65536 00:15:07.127 }, 00:15:07.127 { 00:15:07.127 "name": "BaseBdev3", 00:15:07.127 "uuid": "fbf0932e-7b16-593a-a5b6-c2d1b16f8836", 00:15:07.127 "is_configured": true, 00:15:07.127 "data_offset": 0, 00:15:07.127 "data_size": 65536 00:15:07.127 }, 00:15:07.127 { 00:15:07.127 "name": "BaseBdev4", 00:15:07.127 "uuid": "4fb031bf-ece0-5a3b-9b37-a686529573cb", 00:15:07.127 "is_configured": true, 00:15:07.127 "data_offset": 0, 00:15:07.127 "data_size": 65536 00:15:07.127 } 00:15:07.127 ] 00:15:07.127 }' 00:15:07.127 01:35:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.127 01:35:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.127 01:35:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.127 01:35:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:07.127 01:35:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:07.127 
01:35:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:07.127 01:35:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:07.127 01:35:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=522 00:15:07.127 01:35:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:07.127 01:35:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.127 01:35:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.127 01:35:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.127 01:35:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.127 01:35:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.387 01:35:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.387 01:35:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.387 01:35:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.387 01:35:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.387 01:35:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.387 01:35:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.387 "name": "raid_bdev1", 00:15:07.387 "uuid": "05ac5bac-3f90-4209-8b45-459b801b3102", 00:15:07.387 "strip_size_kb": 64, 00:15:07.387 "state": "online", 00:15:07.387 "raid_level": "raid5f", 00:15:07.387 "superblock": false, 00:15:07.387 "num_base_bdevs": 4, 00:15:07.387 "num_base_bdevs_discovered": 4, 00:15:07.387 "num_base_bdevs_operational": 4, 00:15:07.387 "process": 
{ 00:15:07.387 "type": "rebuild", 00:15:07.387 "target": "spare", 00:15:07.387 "progress": { 00:15:07.387 "blocks": 21120, 00:15:07.387 "percent": 10 00:15:07.387 } 00:15:07.387 }, 00:15:07.387 "base_bdevs_list": [ 00:15:07.387 { 00:15:07.387 "name": "spare", 00:15:07.387 "uuid": "ffe68fd8-37d1-53b9-8219-e2f09eb5c5f0", 00:15:07.387 "is_configured": true, 00:15:07.387 "data_offset": 0, 00:15:07.387 "data_size": 65536 00:15:07.387 }, 00:15:07.387 { 00:15:07.387 "name": "BaseBdev2", 00:15:07.387 "uuid": "b4ae549f-da51-5682-8a86-e9b0cd2fb94b", 00:15:07.387 "is_configured": true, 00:15:07.387 "data_offset": 0, 00:15:07.387 "data_size": 65536 00:15:07.387 }, 00:15:07.387 { 00:15:07.387 "name": "BaseBdev3", 00:15:07.387 "uuid": "fbf0932e-7b16-593a-a5b6-c2d1b16f8836", 00:15:07.387 "is_configured": true, 00:15:07.387 "data_offset": 0, 00:15:07.387 "data_size": 65536 00:15:07.387 }, 00:15:07.387 { 00:15:07.387 "name": "BaseBdev4", 00:15:07.387 "uuid": "4fb031bf-ece0-5a3b-9b37-a686529573cb", 00:15:07.387 "is_configured": true, 00:15:07.387 "data_offset": 0, 00:15:07.387 "data_size": 65536 00:15:07.387 } 00:15:07.387 ] 00:15:07.387 }' 00:15:07.387 01:35:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.387 01:35:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.387 01:35:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.387 01:35:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:07.387 01:35:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:08.326 01:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:08.326 01:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.326 01:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 
-- # local raid_bdev_name=raid_bdev1 00:15:08.326 01:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.326 01:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.326 01:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.326 01:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.326 01:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.326 01:35:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.326 01:35:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.326 01:35:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.326 01:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.326 "name": "raid_bdev1", 00:15:08.326 "uuid": "05ac5bac-3f90-4209-8b45-459b801b3102", 00:15:08.326 "strip_size_kb": 64, 00:15:08.326 "state": "online", 00:15:08.326 "raid_level": "raid5f", 00:15:08.326 "superblock": false, 00:15:08.326 "num_base_bdevs": 4, 00:15:08.326 "num_base_bdevs_discovered": 4, 00:15:08.326 "num_base_bdevs_operational": 4, 00:15:08.326 "process": { 00:15:08.326 "type": "rebuild", 00:15:08.326 "target": "spare", 00:15:08.326 "progress": { 00:15:08.326 "blocks": 42240, 00:15:08.326 "percent": 21 00:15:08.326 } 00:15:08.326 }, 00:15:08.326 "base_bdevs_list": [ 00:15:08.326 { 00:15:08.326 "name": "spare", 00:15:08.326 "uuid": "ffe68fd8-37d1-53b9-8219-e2f09eb5c5f0", 00:15:08.326 "is_configured": true, 00:15:08.326 "data_offset": 0, 00:15:08.326 "data_size": 65536 00:15:08.326 }, 00:15:08.326 { 00:15:08.326 "name": "BaseBdev2", 00:15:08.326 "uuid": "b4ae549f-da51-5682-8a86-e9b0cd2fb94b", 00:15:08.326 "is_configured": true, 00:15:08.326 "data_offset": 0, 
00:15:08.326 "data_size": 65536 00:15:08.326 }, 00:15:08.326 { 00:15:08.326 "name": "BaseBdev3", 00:15:08.326 "uuid": "fbf0932e-7b16-593a-a5b6-c2d1b16f8836", 00:15:08.326 "is_configured": true, 00:15:08.326 "data_offset": 0, 00:15:08.326 "data_size": 65536 00:15:08.326 }, 00:15:08.326 { 00:15:08.326 "name": "BaseBdev4", 00:15:08.326 "uuid": "4fb031bf-ece0-5a3b-9b37-a686529573cb", 00:15:08.326 "is_configured": true, 00:15:08.326 "data_offset": 0, 00:15:08.326 "data_size": 65536 00:15:08.326 } 00:15:08.326 ] 00:15:08.326 }' 00:15:08.326 01:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.585 01:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:08.585 01:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.585 01:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.585 01:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:09.525 01:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:09.525 01:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.525 01:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.525 01:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.525 01:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.525 01:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.525 01:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.525 01:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.525 01:35:08 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.525 01:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.525 01:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.525 01:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.525 "name": "raid_bdev1", 00:15:09.525 "uuid": "05ac5bac-3f90-4209-8b45-459b801b3102", 00:15:09.525 "strip_size_kb": 64, 00:15:09.525 "state": "online", 00:15:09.525 "raid_level": "raid5f", 00:15:09.525 "superblock": false, 00:15:09.525 "num_base_bdevs": 4, 00:15:09.525 "num_base_bdevs_discovered": 4, 00:15:09.525 "num_base_bdevs_operational": 4, 00:15:09.525 "process": { 00:15:09.525 "type": "rebuild", 00:15:09.525 "target": "spare", 00:15:09.525 "progress": { 00:15:09.525 "blocks": 63360, 00:15:09.525 "percent": 32 00:15:09.525 } 00:15:09.525 }, 00:15:09.525 "base_bdevs_list": [ 00:15:09.525 { 00:15:09.525 "name": "spare", 00:15:09.525 "uuid": "ffe68fd8-37d1-53b9-8219-e2f09eb5c5f0", 00:15:09.525 "is_configured": true, 00:15:09.525 "data_offset": 0, 00:15:09.525 "data_size": 65536 00:15:09.525 }, 00:15:09.525 { 00:15:09.525 "name": "BaseBdev2", 00:15:09.525 "uuid": "b4ae549f-da51-5682-8a86-e9b0cd2fb94b", 00:15:09.525 "is_configured": true, 00:15:09.525 "data_offset": 0, 00:15:09.525 "data_size": 65536 00:15:09.525 }, 00:15:09.525 { 00:15:09.525 "name": "BaseBdev3", 00:15:09.525 "uuid": "fbf0932e-7b16-593a-a5b6-c2d1b16f8836", 00:15:09.525 "is_configured": true, 00:15:09.525 "data_offset": 0, 00:15:09.525 "data_size": 65536 00:15:09.525 }, 00:15:09.525 { 00:15:09.525 "name": "BaseBdev4", 00:15:09.525 "uuid": "4fb031bf-ece0-5a3b-9b37-a686529573cb", 00:15:09.525 "is_configured": true, 00:15:09.525 "data_offset": 0, 00:15:09.525 "data_size": 65536 00:15:09.525 } 00:15:09.525 ] 00:15:09.525 }' 00:15:09.525 01:35:08 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.525 01:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.525 01:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.525 01:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.525 01:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:10.906 01:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:10.906 01:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.906 01:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.906 01:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.906 01:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:10.906 01:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.906 01:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.906 01:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.906 01:35:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.906 01:35:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.906 01:35:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.906 01:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.906 "name": "raid_bdev1", 00:15:10.906 "uuid": "05ac5bac-3f90-4209-8b45-459b801b3102", 00:15:10.906 "strip_size_kb": 64, 00:15:10.906 "state": "online", 00:15:10.906 "raid_level": "raid5f", 
00:15:10.906 "superblock": false, 00:15:10.906 "num_base_bdevs": 4, 00:15:10.906 "num_base_bdevs_discovered": 4, 00:15:10.906 "num_base_bdevs_operational": 4, 00:15:10.906 "process": { 00:15:10.906 "type": "rebuild", 00:15:10.906 "target": "spare", 00:15:10.906 "progress": { 00:15:10.906 "blocks": 86400, 00:15:10.906 "percent": 43 00:15:10.906 } 00:15:10.906 }, 00:15:10.906 "base_bdevs_list": [ 00:15:10.906 { 00:15:10.906 "name": "spare", 00:15:10.906 "uuid": "ffe68fd8-37d1-53b9-8219-e2f09eb5c5f0", 00:15:10.906 "is_configured": true, 00:15:10.906 "data_offset": 0, 00:15:10.906 "data_size": 65536 00:15:10.906 }, 00:15:10.906 { 00:15:10.906 "name": "BaseBdev2", 00:15:10.906 "uuid": "b4ae549f-da51-5682-8a86-e9b0cd2fb94b", 00:15:10.906 "is_configured": true, 00:15:10.906 "data_offset": 0, 00:15:10.906 "data_size": 65536 00:15:10.906 }, 00:15:10.906 { 00:15:10.906 "name": "BaseBdev3", 00:15:10.906 "uuid": "fbf0932e-7b16-593a-a5b6-c2d1b16f8836", 00:15:10.906 "is_configured": true, 00:15:10.906 "data_offset": 0, 00:15:10.906 "data_size": 65536 00:15:10.906 }, 00:15:10.906 { 00:15:10.906 "name": "BaseBdev4", 00:15:10.906 "uuid": "4fb031bf-ece0-5a3b-9b37-a686529573cb", 00:15:10.906 "is_configured": true, 00:15:10.906 "data_offset": 0, 00:15:10.906 "data_size": 65536 00:15:10.906 } 00:15:10.906 ] 00:15:10.906 }' 00:15:10.906 01:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.906 01:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:10.906 01:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.906 01:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.906 01:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:11.846 01:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:11.846 01:35:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.846 01:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.846 01:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.846 01:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.846 01:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.846 01:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.846 01:35:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.846 01:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.846 01:35:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.846 01:35:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.846 01:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.846 "name": "raid_bdev1", 00:15:11.846 "uuid": "05ac5bac-3f90-4209-8b45-459b801b3102", 00:15:11.846 "strip_size_kb": 64, 00:15:11.846 "state": "online", 00:15:11.846 "raid_level": "raid5f", 00:15:11.846 "superblock": false, 00:15:11.846 "num_base_bdevs": 4, 00:15:11.846 "num_base_bdevs_discovered": 4, 00:15:11.846 "num_base_bdevs_operational": 4, 00:15:11.846 "process": { 00:15:11.846 "type": "rebuild", 00:15:11.846 "target": "spare", 00:15:11.846 "progress": { 00:15:11.846 "blocks": 107520, 00:15:11.846 "percent": 54 00:15:11.846 } 00:15:11.846 }, 00:15:11.846 "base_bdevs_list": [ 00:15:11.846 { 00:15:11.846 "name": "spare", 00:15:11.846 "uuid": "ffe68fd8-37d1-53b9-8219-e2f09eb5c5f0", 00:15:11.846 "is_configured": true, 00:15:11.846 "data_offset": 0, 00:15:11.846 "data_size": 65536 00:15:11.846 }, 
00:15:11.846 { 00:15:11.846 "name": "BaseBdev2", 00:15:11.846 "uuid": "b4ae549f-da51-5682-8a86-e9b0cd2fb94b", 00:15:11.846 "is_configured": true, 00:15:11.846 "data_offset": 0, 00:15:11.846 "data_size": 65536 00:15:11.846 }, 00:15:11.846 { 00:15:11.846 "name": "BaseBdev3", 00:15:11.846 "uuid": "fbf0932e-7b16-593a-a5b6-c2d1b16f8836", 00:15:11.846 "is_configured": true, 00:15:11.846 "data_offset": 0, 00:15:11.846 "data_size": 65536 00:15:11.846 }, 00:15:11.846 { 00:15:11.846 "name": "BaseBdev4", 00:15:11.846 "uuid": "4fb031bf-ece0-5a3b-9b37-a686529573cb", 00:15:11.846 "is_configured": true, 00:15:11.846 "data_offset": 0, 00:15:11.846 "data_size": 65536 00:15:11.846 } 00:15:11.846 ] 00:15:11.846 }' 00:15:11.846 01:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.846 01:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.846 01:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.846 01:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:11.846 01:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:13.226 01:35:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:13.226 01:35:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.226 01:35:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.226 01:35:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.226 01:35:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.226 01:35:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.226 01:35:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:15:13.226 01:35:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.226 01:35:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.226 01:35:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.226 01:35:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.226 01:35:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.226 "name": "raid_bdev1", 00:15:13.226 "uuid": "05ac5bac-3f90-4209-8b45-459b801b3102", 00:15:13.226 "strip_size_kb": 64, 00:15:13.226 "state": "online", 00:15:13.226 "raid_level": "raid5f", 00:15:13.226 "superblock": false, 00:15:13.226 "num_base_bdevs": 4, 00:15:13.226 "num_base_bdevs_discovered": 4, 00:15:13.226 "num_base_bdevs_operational": 4, 00:15:13.226 "process": { 00:15:13.226 "type": "rebuild", 00:15:13.226 "target": "spare", 00:15:13.226 "progress": { 00:15:13.226 "blocks": 128640, 00:15:13.226 "percent": 65 00:15:13.226 } 00:15:13.226 }, 00:15:13.226 "base_bdevs_list": [ 00:15:13.226 { 00:15:13.226 "name": "spare", 00:15:13.227 "uuid": "ffe68fd8-37d1-53b9-8219-e2f09eb5c5f0", 00:15:13.227 "is_configured": true, 00:15:13.227 "data_offset": 0, 00:15:13.227 "data_size": 65536 00:15:13.227 }, 00:15:13.227 { 00:15:13.227 "name": "BaseBdev2", 00:15:13.227 "uuid": "b4ae549f-da51-5682-8a86-e9b0cd2fb94b", 00:15:13.227 "is_configured": true, 00:15:13.227 "data_offset": 0, 00:15:13.227 "data_size": 65536 00:15:13.227 }, 00:15:13.227 { 00:15:13.227 "name": "BaseBdev3", 00:15:13.227 "uuid": "fbf0932e-7b16-593a-a5b6-c2d1b16f8836", 00:15:13.227 "is_configured": true, 00:15:13.227 "data_offset": 0, 00:15:13.227 "data_size": 65536 00:15:13.227 }, 00:15:13.227 { 00:15:13.227 "name": "BaseBdev4", 00:15:13.227 "uuid": "4fb031bf-ece0-5a3b-9b37-a686529573cb", 00:15:13.227 "is_configured": true, 00:15:13.227 
"data_offset": 0, 00:15:13.227 "data_size": 65536 00:15:13.227 } 00:15:13.227 ] 00:15:13.227 }' 00:15:13.227 01:35:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.227 01:35:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.227 01:35:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.227 01:35:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.227 01:35:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:14.166 01:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:14.166 01:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.166 01:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.166 01:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.166 01:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.166 01:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.166 01:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.166 01:35:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.166 01:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.166 01:35:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.166 01:35:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.166 01:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.166 "name": "raid_bdev1", 00:15:14.166 
"uuid": "05ac5bac-3f90-4209-8b45-459b801b3102", 00:15:14.166 "strip_size_kb": 64, 00:15:14.166 "state": "online", 00:15:14.166 "raid_level": "raid5f", 00:15:14.166 "superblock": false, 00:15:14.166 "num_base_bdevs": 4, 00:15:14.166 "num_base_bdevs_discovered": 4, 00:15:14.166 "num_base_bdevs_operational": 4, 00:15:14.166 "process": { 00:15:14.166 "type": "rebuild", 00:15:14.166 "target": "spare", 00:15:14.166 "progress": { 00:15:14.166 "blocks": 151680, 00:15:14.166 "percent": 77 00:15:14.166 } 00:15:14.166 }, 00:15:14.166 "base_bdevs_list": [ 00:15:14.166 { 00:15:14.166 "name": "spare", 00:15:14.166 "uuid": "ffe68fd8-37d1-53b9-8219-e2f09eb5c5f0", 00:15:14.166 "is_configured": true, 00:15:14.166 "data_offset": 0, 00:15:14.166 "data_size": 65536 00:15:14.166 }, 00:15:14.166 { 00:15:14.166 "name": "BaseBdev2", 00:15:14.166 "uuid": "b4ae549f-da51-5682-8a86-e9b0cd2fb94b", 00:15:14.166 "is_configured": true, 00:15:14.166 "data_offset": 0, 00:15:14.166 "data_size": 65536 00:15:14.166 }, 00:15:14.166 { 00:15:14.166 "name": "BaseBdev3", 00:15:14.166 "uuid": "fbf0932e-7b16-593a-a5b6-c2d1b16f8836", 00:15:14.166 "is_configured": true, 00:15:14.166 "data_offset": 0, 00:15:14.166 "data_size": 65536 00:15:14.166 }, 00:15:14.166 { 00:15:14.166 "name": "BaseBdev4", 00:15:14.166 "uuid": "4fb031bf-ece0-5a3b-9b37-a686529573cb", 00:15:14.166 "is_configured": true, 00:15:14.166 "data_offset": 0, 00:15:14.166 "data_size": 65536 00:15:14.166 } 00:15:14.166 ] 00:15:14.166 }' 00:15:14.166 01:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.166 01:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.166 01:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.166 01:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.166 01:35:12 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:15:15.105 01:35:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:15.105 01:35:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.105 01:35:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.105 01:35:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.105 01:35:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.105 01:35:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.105 01:35:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.105 01:35:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.105 01:35:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.105 01:35:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.365 01:35:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.365 01:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.365 "name": "raid_bdev1", 00:15:15.365 "uuid": "05ac5bac-3f90-4209-8b45-459b801b3102", 00:15:15.365 "strip_size_kb": 64, 00:15:15.365 "state": "online", 00:15:15.365 "raid_level": "raid5f", 00:15:15.365 "superblock": false, 00:15:15.365 "num_base_bdevs": 4, 00:15:15.365 "num_base_bdevs_discovered": 4, 00:15:15.365 "num_base_bdevs_operational": 4, 00:15:15.365 "process": { 00:15:15.365 "type": "rebuild", 00:15:15.365 "target": "spare", 00:15:15.365 "progress": { 00:15:15.365 "blocks": 172800, 00:15:15.365 "percent": 87 00:15:15.365 } 00:15:15.365 }, 00:15:15.365 "base_bdevs_list": [ 00:15:15.365 { 00:15:15.365 "name": "spare", 00:15:15.365 
"uuid": "ffe68fd8-37d1-53b9-8219-e2f09eb5c5f0", 00:15:15.365 "is_configured": true, 00:15:15.365 "data_offset": 0, 00:15:15.365 "data_size": 65536 00:15:15.365 }, 00:15:15.365 { 00:15:15.365 "name": "BaseBdev2", 00:15:15.365 "uuid": "b4ae549f-da51-5682-8a86-e9b0cd2fb94b", 00:15:15.365 "is_configured": true, 00:15:15.365 "data_offset": 0, 00:15:15.365 "data_size": 65536 00:15:15.365 }, 00:15:15.365 { 00:15:15.365 "name": "BaseBdev3", 00:15:15.365 "uuid": "fbf0932e-7b16-593a-a5b6-c2d1b16f8836", 00:15:15.365 "is_configured": true, 00:15:15.365 "data_offset": 0, 00:15:15.365 "data_size": 65536 00:15:15.365 }, 00:15:15.365 { 00:15:15.365 "name": "BaseBdev4", 00:15:15.365 "uuid": "4fb031bf-ece0-5a3b-9b37-a686529573cb", 00:15:15.365 "is_configured": true, 00:15:15.365 "data_offset": 0, 00:15:15.365 "data_size": 65536 00:15:15.365 } 00:15:15.365 ] 00:15:15.365 }' 00:15:15.365 01:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.365 01:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.365 01:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.365 01:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.365 01:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:16.304 01:35:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:16.304 01:35:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.304 01:35:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.304 01:35:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.304 01:35:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.304 01:35:15 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.304 01:35:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.304 01:35:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.304 01:35:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.304 01:35:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.304 01:35:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.304 01:35:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.304 "name": "raid_bdev1", 00:15:16.304 "uuid": "05ac5bac-3f90-4209-8b45-459b801b3102", 00:15:16.304 "strip_size_kb": 64, 00:15:16.304 "state": "online", 00:15:16.304 "raid_level": "raid5f", 00:15:16.304 "superblock": false, 00:15:16.304 "num_base_bdevs": 4, 00:15:16.304 "num_base_bdevs_discovered": 4, 00:15:16.304 "num_base_bdevs_operational": 4, 00:15:16.304 "process": { 00:15:16.304 "type": "rebuild", 00:15:16.304 "target": "spare", 00:15:16.304 "progress": { 00:15:16.304 "blocks": 195840, 00:15:16.304 "percent": 99 00:15:16.304 } 00:15:16.304 }, 00:15:16.304 "base_bdevs_list": [ 00:15:16.304 { 00:15:16.304 "name": "spare", 00:15:16.304 "uuid": "ffe68fd8-37d1-53b9-8219-e2f09eb5c5f0", 00:15:16.304 "is_configured": true, 00:15:16.304 "data_offset": 0, 00:15:16.304 "data_size": 65536 00:15:16.304 }, 00:15:16.304 { 00:15:16.304 "name": "BaseBdev2", 00:15:16.304 "uuid": "b4ae549f-da51-5682-8a86-e9b0cd2fb94b", 00:15:16.304 "is_configured": true, 00:15:16.304 "data_offset": 0, 00:15:16.304 "data_size": 65536 00:15:16.304 }, 00:15:16.304 { 00:15:16.304 "name": "BaseBdev3", 00:15:16.304 "uuid": "fbf0932e-7b16-593a-a5b6-c2d1b16f8836", 00:15:16.304 "is_configured": true, 00:15:16.304 "data_offset": 0, 00:15:16.304 "data_size": 65536 00:15:16.304 }, 
00:15:16.304 { 00:15:16.304 "name": "BaseBdev4", 00:15:16.304 "uuid": "4fb031bf-ece0-5a3b-9b37-a686529573cb", 00:15:16.304 "is_configured": true, 00:15:16.304 "data_offset": 0, 00:15:16.304 "data_size": 65536 00:15:16.304 } 00:15:16.304 ] 00:15:16.304 }' 00:15:16.304 01:35:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.564 01:35:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.564 01:35:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.564 [2024-10-09 01:35:15.243903] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:16.564 [2024-10-09 01:35:15.243972] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:16.564 [2024-10-09 01:35:15.244041] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.564 01:35:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.564 01:35:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:17.502 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:17.502 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.502 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.502 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.502 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.502 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.502 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.502 01:35:16 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.502 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.502 01:35:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.502 01:35:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.502 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.502 "name": "raid_bdev1", 00:15:17.502 "uuid": "05ac5bac-3f90-4209-8b45-459b801b3102", 00:15:17.502 "strip_size_kb": 64, 00:15:17.502 "state": "online", 00:15:17.502 "raid_level": "raid5f", 00:15:17.502 "superblock": false, 00:15:17.502 "num_base_bdevs": 4, 00:15:17.502 "num_base_bdevs_discovered": 4, 00:15:17.502 "num_base_bdevs_operational": 4, 00:15:17.502 "base_bdevs_list": [ 00:15:17.502 { 00:15:17.502 "name": "spare", 00:15:17.502 "uuid": "ffe68fd8-37d1-53b9-8219-e2f09eb5c5f0", 00:15:17.502 "is_configured": true, 00:15:17.502 "data_offset": 0, 00:15:17.502 "data_size": 65536 00:15:17.502 }, 00:15:17.502 { 00:15:17.502 "name": "BaseBdev2", 00:15:17.502 "uuid": "b4ae549f-da51-5682-8a86-e9b0cd2fb94b", 00:15:17.502 "is_configured": true, 00:15:17.502 "data_offset": 0, 00:15:17.502 "data_size": 65536 00:15:17.502 }, 00:15:17.502 { 00:15:17.502 "name": "BaseBdev3", 00:15:17.502 "uuid": "fbf0932e-7b16-593a-a5b6-c2d1b16f8836", 00:15:17.502 "is_configured": true, 00:15:17.502 "data_offset": 0, 00:15:17.502 "data_size": 65536 00:15:17.502 }, 00:15:17.502 { 00:15:17.502 "name": "BaseBdev4", 00:15:17.502 "uuid": "4fb031bf-ece0-5a3b-9b37-a686529573cb", 00:15:17.502 "is_configured": true, 00:15:17.502 "data_offset": 0, 00:15:17.502 "data_size": 65536 00:15:17.502 } 00:15:17.502 ] 00:15:17.502 }' 00:15:17.502 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.502 01:35:16 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:17.502 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.762 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:17.762 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:17.762 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:17.762 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.762 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:17.762 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:17.762 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.762 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.762 01:35:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.762 01:35:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.762 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.762 01:35:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.762 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.762 "name": "raid_bdev1", 00:15:17.762 "uuid": "05ac5bac-3f90-4209-8b45-459b801b3102", 00:15:17.762 "strip_size_kb": 64, 00:15:17.762 "state": "online", 00:15:17.762 "raid_level": "raid5f", 00:15:17.762 "superblock": false, 00:15:17.762 "num_base_bdevs": 4, 00:15:17.762 "num_base_bdevs_discovered": 4, 00:15:17.762 "num_base_bdevs_operational": 4, 00:15:17.762 "base_bdevs_list": [ 00:15:17.762 { 00:15:17.762 "name": "spare", 
00:15:17.762 "uuid": "ffe68fd8-37d1-53b9-8219-e2f09eb5c5f0", 00:15:17.762 "is_configured": true, 00:15:17.762 "data_offset": 0, 00:15:17.762 "data_size": 65536 00:15:17.762 }, 00:15:17.762 { 00:15:17.762 "name": "BaseBdev2", 00:15:17.762 "uuid": "b4ae549f-da51-5682-8a86-e9b0cd2fb94b", 00:15:17.762 "is_configured": true, 00:15:17.762 "data_offset": 0, 00:15:17.762 "data_size": 65536 00:15:17.762 }, 00:15:17.762 { 00:15:17.762 "name": "BaseBdev3", 00:15:17.762 "uuid": "fbf0932e-7b16-593a-a5b6-c2d1b16f8836", 00:15:17.762 "is_configured": true, 00:15:17.762 "data_offset": 0, 00:15:17.762 "data_size": 65536 00:15:17.762 }, 00:15:17.762 { 00:15:17.762 "name": "BaseBdev4", 00:15:17.762 "uuid": "4fb031bf-ece0-5a3b-9b37-a686529573cb", 00:15:17.762 "is_configured": true, 00:15:17.762 "data_offset": 0, 00:15:17.762 "data_size": 65536 00:15:17.762 } 00:15:17.762 ] 00:15:17.762 }' 00:15:17.762 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.762 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:17.762 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.762 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:17.762 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:17.762 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.762 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.762 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.762 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.762 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:17.762 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.763 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.763 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.763 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.763 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.763 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.763 01:35:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.763 01:35:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.763 01:35:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.763 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.763 "name": "raid_bdev1", 00:15:17.763 "uuid": "05ac5bac-3f90-4209-8b45-459b801b3102", 00:15:17.763 "strip_size_kb": 64, 00:15:17.763 "state": "online", 00:15:17.763 "raid_level": "raid5f", 00:15:17.763 "superblock": false, 00:15:17.763 "num_base_bdevs": 4, 00:15:17.763 "num_base_bdevs_discovered": 4, 00:15:17.763 "num_base_bdevs_operational": 4, 00:15:17.763 "base_bdevs_list": [ 00:15:17.763 { 00:15:17.763 "name": "spare", 00:15:17.763 "uuid": "ffe68fd8-37d1-53b9-8219-e2f09eb5c5f0", 00:15:17.763 "is_configured": true, 00:15:17.763 "data_offset": 0, 00:15:17.763 "data_size": 65536 00:15:17.763 }, 00:15:17.763 { 00:15:17.763 "name": "BaseBdev2", 00:15:17.763 "uuid": "b4ae549f-da51-5682-8a86-e9b0cd2fb94b", 00:15:17.763 "is_configured": true, 00:15:17.763 "data_offset": 0, 00:15:17.763 "data_size": 65536 00:15:17.763 }, 00:15:17.763 { 00:15:17.763 "name": "BaseBdev3", 00:15:17.763 "uuid": 
"fbf0932e-7b16-593a-a5b6-c2d1b16f8836", 00:15:17.763 "is_configured": true, 00:15:17.763 "data_offset": 0, 00:15:17.763 "data_size": 65536 00:15:17.763 }, 00:15:17.763 { 00:15:17.763 "name": "BaseBdev4", 00:15:17.763 "uuid": "4fb031bf-ece0-5a3b-9b37-a686529573cb", 00:15:17.763 "is_configured": true, 00:15:17.763 "data_offset": 0, 00:15:17.763 "data_size": 65536 00:15:17.763 } 00:15:17.763 ] 00:15:17.763 }' 00:15:17.763 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.763 01:35:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.334 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:18.334 01:35:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.334 01:35:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.334 [2024-10-09 01:35:16.977178] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:18.334 [2024-10-09 01:35:16.977211] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.334 [2024-10-09 01:35:16.977302] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.334 [2024-10-09 01:35:16.977394] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:18.334 [2024-10-09 01:35:16.977411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:18.334 01:35:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.334 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.334 01:35:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.334 01:35:16 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:18.334 01:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:18.334 01:35:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.334 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:18.334 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:18.334 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:18.334 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:18.334 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:18.334 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:18.334 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:18.334 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:18.334 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:18.335 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:18.335 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:18.335 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:18.335 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:18.594 /dev/nbd0 00:15:18.594 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:18.594 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:18.594 01:35:17 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:18.594 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:18.594 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:18.594 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:18.594 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:18.594 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:18.594 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:18.594 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:18.594 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:18.594 1+0 records in 00:15:18.594 1+0 records out 00:15:18.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208484 s, 19.6 MB/s 00:15:18.594 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.594 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:18.594 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.594 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:18.594 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:18.594 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:18.594 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:18.594 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:18.594 /dev/nbd1 00:15:18.854 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:18.854 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:18.854 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:18.854 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:18.854 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:18.854 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:18.854 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:18.854 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:18.854 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:18.854 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:18.854 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:18.854 1+0 records in 00:15:18.854 1+0 records out 00:15:18.854 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436162 s, 9.4 MB/s 00:15:18.854 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.854 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:18.854 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.854 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:18.854 01:35:17 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:18.854 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:18.854 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:18.854 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:18.854 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:18.854 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:18.854 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:18.854 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:18.854 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:18.854 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:18.854 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:19.114 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:19.114 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:19.114 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:19.114 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:19.114 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:19.114 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:19.114 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:19.114 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:15:19.114 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:19.114 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:19.114 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:19.114 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:19.114 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:19.114 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:19.114 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:19.114 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:19.114 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:19.114 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:19.114 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:19.114 01:35:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 96182 00:15:19.114 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 96182 ']' 00:15:19.114 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 96182 00:15:19.114 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:15:19.114 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:19.114 01:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96182 00:15:19.374 killing process with pid 96182 00:15:19.374 Received shutdown signal, test time was about 60.000000 seconds 00:15:19.374 00:15:19.374 Latency(us) 00:15:19.374 
[2024-10-09T01:35:18.267Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.374 [2024-10-09T01:35:18.267Z] =================================================================================================================== 00:15:19.374 [2024-10-09T01:35:18.267Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:19.374 01:35:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:19.374 01:35:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:19.374 01:35:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96182' 00:15:19.374 01:35:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 96182 00:15:19.374 [2024-10-09 01:35:18.015986] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:19.374 01:35:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 96182 00:15:19.374 [2024-10-09 01:35:18.107268] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:19.636 01:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:19.636 00:15:19.636 real 0m18.671s 00:15:19.636 user 0m22.262s 00:15:19.636 sys 0m2.540s 00:15:19.636 01:35:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:19.636 01:35:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.636 ************************************ 00:15:19.636 END TEST raid5f_rebuild_test 00:15:19.636 ************************************ 00:15:19.636 01:35:18 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:15:19.636 01:35:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:19.636 01:35:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:19.906 01:35:18 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:15:19.906 ************************************ 00:15:19.906 START TEST raid5f_rebuild_test_sb 00:15:19.906 ************************************ 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=96686 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 96686 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 
-- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 96686 ']' 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:19.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:19.906 01:35:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.906 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:19.906 Zero copy mechanism will not be used. 00:15:19.906 [2024-10-09 01:35:18.636619] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:15:19.906 [2024-10-09 01:35:18.636746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96686 ] 00:15:19.906 [2024-10-09 01:35:18.768031] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:20.182 [2024-10-09 01:35:18.797520] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.182 [2024-10-09 01:35:18.868877] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.182 [2024-10-09 01:35:18.944822] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:20.182 [2024-10-09 01:35:18.944865] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:20.751 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:20.751 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:20.751 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:20.751 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:20.751 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.751 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.751 BaseBdev1_malloc 00:15:20.751 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.751 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:20.751 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.751 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.751 [2024-10-09 01:35:19.471890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:20.751 [2024-10-09 01:35:19.471964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.751 [2024-10-09 01:35:19.471992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:20.751 
[2024-10-09 01:35:19.472009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.751 [2024-10-09 01:35:19.474536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.751 [2024-10-09 01:35:19.474572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:20.751 BaseBdev1 00:15:20.751 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.751 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:20.751 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:20.751 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.751 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.751 BaseBdev2_malloc 00:15:20.751 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.751 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:20.751 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.751 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.751 [2024-10-09 01:35:19.522571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:20.751 [2024-10-09 01:35:19.522815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.752 [2024-10-09 01:35:19.522869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:20.752 [2024-10-09 01:35:19.522898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.752 [2024-10-09 01:35:19.527173] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.752 [2024-10-09 01:35:19.527231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:20.752 BaseBdev2 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.752 BaseBdev3_malloc 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.752 [2024-10-09 01:35:19.559348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:20.752 [2024-10-09 01:35:19.559402] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.752 [2024-10-09 01:35:19.559423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:20.752 [2024-10-09 01:35:19.559434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.752 [2024-10-09 01:35:19.561786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.752 [2024-10-09 01:35:19.561896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:15:20.752 BaseBdev3 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.752 BaseBdev4_malloc 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.752 [2024-10-09 01:35:19.593964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:20.752 [2024-10-09 01:35:19.594027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.752 [2024-10-09 01:35:19.594057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:20.752 [2024-10-09 01:35:19.594069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.752 [2024-10-09 01:35:19.596338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.752 [2024-10-09 01:35:19.596374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:20.752 BaseBdev4 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.752 01:35:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.752 spare_malloc 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.752 spare_delay 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.752 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.752 [2024-10-09 01:35:19.640663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:20.752 [2024-10-09 01:35:19.640799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.752 [2024-10-09 01:35:19.640838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:20.752 [2024-10-09 01:35:19.640874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.012 [2024-10-09 01:35:19.643333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.012 [2024-10-09 01:35:19.643407] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: spare 00:15:21.012 spare 00:15:21.012 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.012 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:21.012 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.012 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.012 [2024-10-09 01:35:19.652793] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:21.012 [2024-10-09 01:35:19.654902] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:21.012 [2024-10-09 01:35:19.654959] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:21.012 [2024-10-09 01:35:19.654999] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:21.012 [2024-10-09 01:35:19.655166] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:21.012 [2024-10-09 01:35:19.655182] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:21.012 [2024-10-09 01:35:19.655420] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:21.012 [2024-10-09 01:35:19.655910] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:21.012 [2024-10-09 01:35:19.655974] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:21.012 [2024-10-09 01:35:19.656102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.012 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.012 01:35:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:21.012 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.012 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.012 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.012 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.012 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:21.012 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.012 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.012 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.012 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.012 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.012 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.012 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.012 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.012 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.012 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.012 "name": "raid_bdev1", 00:15:21.012 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:21.012 "strip_size_kb": 64, 00:15:21.012 "state": "online", 00:15:21.012 "raid_level": "raid5f", 00:15:21.012 "superblock": true, 
00:15:21.012 "num_base_bdevs": 4, 00:15:21.012 "num_base_bdevs_discovered": 4, 00:15:21.012 "num_base_bdevs_operational": 4, 00:15:21.012 "base_bdevs_list": [ 00:15:21.012 { 00:15:21.012 "name": "BaseBdev1", 00:15:21.012 "uuid": "a730e704-a4bb-5e1a-9c6f-1e4fdcc411f2", 00:15:21.012 "is_configured": true, 00:15:21.012 "data_offset": 2048, 00:15:21.012 "data_size": 63488 00:15:21.012 }, 00:15:21.012 { 00:15:21.012 "name": "BaseBdev2", 00:15:21.012 "uuid": "88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:21.012 "is_configured": true, 00:15:21.012 "data_offset": 2048, 00:15:21.012 "data_size": 63488 00:15:21.012 }, 00:15:21.012 { 00:15:21.012 "name": "BaseBdev3", 00:15:21.012 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:21.012 "is_configured": true, 00:15:21.012 "data_offset": 2048, 00:15:21.012 "data_size": 63488 00:15:21.012 }, 00:15:21.012 { 00:15:21.012 "name": "BaseBdev4", 00:15:21.012 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 00:15:21.012 "is_configured": true, 00:15:21.012 "data_offset": 2048, 00:15:21.012 "data_size": 63488 00:15:21.012 } 00:15:21.012 ] 00:15:21.013 }' 00:15:21.013 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.013 01:35:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.272 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:21.272 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.272 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.272 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:21.272 [2024-10-09 01:35:20.127310] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:21.272 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.531 01:35:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:15:21.531 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.531 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:21.531 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.531 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.531 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.531 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:21.531 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:21.531 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:21.531 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:21.531 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:21.531 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:21.531 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:21.531 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:21.531 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:21.531 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:21.531 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:21.531 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:21.531 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:21.531 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:21.531 [2024-10-09 01:35:20.371202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:21.531 /dev/nbd0 00:15:21.531 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:21.531 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:21.531 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:21.531 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:21.531 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:21.531 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:21.531 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:21.791 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:21.791 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:21.791 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:21.791 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:21.791 1+0 records in 00:15:21.791 1+0 records out 00:15:21.791 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567406 s, 7.2 MB/s 00:15:21.791 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.791 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 
-- # size=4096 00:15:21.791 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:21.791 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:21.791 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:21.791 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:21.791 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:21.791 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:21.791 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:21.791 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:21.791 01:35:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:15:22.361 496+0 records in 00:15:22.361 496+0 records out 00:15:22.361 97517568 bytes (98 MB, 93 MiB) copied, 0.564372 s, 173 MB/s 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.361 [2024-10-09 01:35:21.215603] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.361 [2024-10-09 01:35:21.227700] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.361 01:35:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.361 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.620 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.620 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.620 "name": "raid_bdev1", 00:15:22.620 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:22.620 "strip_size_kb": 64, 00:15:22.620 "state": "online", 00:15:22.620 "raid_level": "raid5f", 00:15:22.620 "superblock": true, 00:15:22.620 "num_base_bdevs": 4, 00:15:22.620 "num_base_bdevs_discovered": 3, 00:15:22.620 "num_base_bdevs_operational": 3, 00:15:22.620 "base_bdevs_list": [ 00:15:22.620 { 00:15:22.620 "name": null, 00:15:22.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.620 "is_configured": false, 00:15:22.620 "data_offset": 0, 00:15:22.620 "data_size": 63488 00:15:22.620 }, 00:15:22.620 { 00:15:22.620 "name": "BaseBdev2", 00:15:22.621 "uuid": 
"88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:22.621 "is_configured": true, 00:15:22.621 "data_offset": 2048, 00:15:22.621 "data_size": 63488 00:15:22.621 }, 00:15:22.621 { 00:15:22.621 "name": "BaseBdev3", 00:15:22.621 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:22.621 "is_configured": true, 00:15:22.621 "data_offset": 2048, 00:15:22.621 "data_size": 63488 00:15:22.621 }, 00:15:22.621 { 00:15:22.621 "name": "BaseBdev4", 00:15:22.621 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 00:15:22.621 "is_configured": true, 00:15:22.621 "data_offset": 2048, 00:15:22.621 "data_size": 63488 00:15:22.621 } 00:15:22.621 ] 00:15:22.621 }' 00:15:22.621 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.621 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.879 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:22.880 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.880 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.880 [2024-10-09 01:35:21.675783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:22.880 [2024-10-09 01:35:21.681555] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:15:22.880 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.880 01:35:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:22.880 [2024-10-09 01:35:21.683959] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:23.818 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.818 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:23.818 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.818 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.818 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.818 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.818 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.818 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.818 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.077 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.077 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.077 "name": "raid_bdev1", 00:15:24.077 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:24.077 "strip_size_kb": 64, 00:15:24.077 "state": "online", 00:15:24.077 "raid_level": "raid5f", 00:15:24.077 "superblock": true, 00:15:24.077 "num_base_bdevs": 4, 00:15:24.077 "num_base_bdevs_discovered": 4, 00:15:24.077 "num_base_bdevs_operational": 4, 00:15:24.077 "process": { 00:15:24.077 "type": "rebuild", 00:15:24.077 "target": "spare", 00:15:24.077 "progress": { 00:15:24.077 "blocks": 19200, 00:15:24.077 "percent": 10 00:15:24.077 } 00:15:24.077 }, 00:15:24.077 "base_bdevs_list": [ 00:15:24.077 { 00:15:24.077 "name": "spare", 00:15:24.077 "uuid": "95a3b644-3d81-54a7-8e62-d7f8b429410a", 00:15:24.077 "is_configured": true, 00:15:24.077 "data_offset": 2048, 00:15:24.077 "data_size": 63488 00:15:24.077 }, 00:15:24.077 { 00:15:24.077 "name": "BaseBdev2", 00:15:24.077 "uuid": "88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:24.077 "is_configured": true, 00:15:24.077 
"data_offset": 2048, 00:15:24.077 "data_size": 63488 00:15:24.077 }, 00:15:24.077 { 00:15:24.077 "name": "BaseBdev3", 00:15:24.077 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:24.077 "is_configured": true, 00:15:24.077 "data_offset": 2048, 00:15:24.077 "data_size": 63488 00:15:24.077 }, 00:15:24.077 { 00:15:24.077 "name": "BaseBdev4", 00:15:24.077 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 00:15:24.077 "is_configured": true, 00:15:24.077 "data_offset": 2048, 00:15:24.077 "data_size": 63488 00:15:24.077 } 00:15:24.077 ] 00:15:24.077 }' 00:15:24.077 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.077 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.077 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.077 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.077 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:24.077 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.077 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.077 [2024-10-09 01:35:22.821488] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:24.077 [2024-10-09 01:35:22.892436] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:24.077 [2024-10-09 01:35:22.892498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.077 [2024-10-09 01:35:22.892515] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:24.077 [2024-10-09 01:35:22.892541] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:24.077 
01:35:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.077 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:24.077 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.077 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.077 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.077 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.077 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.077 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.077 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.077 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.077 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.077 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.077 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.077 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.077 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.077 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.077 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.077 "name": "raid_bdev1", 00:15:24.077 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:24.077 
"strip_size_kb": 64, 00:15:24.077 "state": "online", 00:15:24.077 "raid_level": "raid5f", 00:15:24.077 "superblock": true, 00:15:24.077 "num_base_bdevs": 4, 00:15:24.077 "num_base_bdevs_discovered": 3, 00:15:24.077 "num_base_bdevs_operational": 3, 00:15:24.077 "base_bdevs_list": [ 00:15:24.077 { 00:15:24.077 "name": null, 00:15:24.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.077 "is_configured": false, 00:15:24.077 "data_offset": 0, 00:15:24.077 "data_size": 63488 00:15:24.077 }, 00:15:24.077 { 00:15:24.077 "name": "BaseBdev2", 00:15:24.077 "uuid": "88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:24.077 "is_configured": true, 00:15:24.077 "data_offset": 2048, 00:15:24.077 "data_size": 63488 00:15:24.077 }, 00:15:24.077 { 00:15:24.077 "name": "BaseBdev3", 00:15:24.077 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:24.077 "is_configured": true, 00:15:24.077 "data_offset": 2048, 00:15:24.077 "data_size": 63488 00:15:24.077 }, 00:15:24.077 { 00:15:24.077 "name": "BaseBdev4", 00:15:24.078 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 00:15:24.078 "is_configured": true, 00:15:24.078 "data_offset": 2048, 00:15:24.078 "data_size": 63488 00:15:24.078 } 00:15:24.078 ] 00:15:24.078 }' 00:15:24.078 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.078 01:35:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.647 01:35:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:24.647 01:35:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.647 01:35:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:24.647 01:35:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:24.647 01:35:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.647 
01:35:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.647 01:35:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.647 01:35:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.647 01:35:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.647 01:35:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.647 01:35:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.647 "name": "raid_bdev1", 00:15:24.647 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:24.647 "strip_size_kb": 64, 00:15:24.647 "state": "online", 00:15:24.647 "raid_level": "raid5f", 00:15:24.647 "superblock": true, 00:15:24.647 "num_base_bdevs": 4, 00:15:24.647 "num_base_bdevs_discovered": 3, 00:15:24.647 "num_base_bdevs_operational": 3, 00:15:24.647 "base_bdevs_list": [ 00:15:24.647 { 00:15:24.647 "name": null, 00:15:24.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.647 "is_configured": false, 00:15:24.647 "data_offset": 0, 00:15:24.647 "data_size": 63488 00:15:24.647 }, 00:15:24.647 { 00:15:24.647 "name": "BaseBdev2", 00:15:24.647 "uuid": "88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:24.647 "is_configured": true, 00:15:24.647 "data_offset": 2048, 00:15:24.647 "data_size": 63488 00:15:24.647 }, 00:15:24.647 { 00:15:24.647 "name": "BaseBdev3", 00:15:24.647 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:24.647 "is_configured": true, 00:15:24.647 "data_offset": 2048, 00:15:24.647 "data_size": 63488 00:15:24.647 }, 00:15:24.647 { 00:15:24.647 "name": "BaseBdev4", 00:15:24.647 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 00:15:24.647 "is_configured": true, 00:15:24.647 "data_offset": 2048, 00:15:24.647 "data_size": 63488 00:15:24.647 } 00:15:24.647 ] 00:15:24.647 }' 00:15:24.647 01:35:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.647 01:35:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:24.647 01:35:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.647 01:35:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:24.647 01:35:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:24.647 01:35:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.647 01:35:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.647 [2024-10-09 01:35:23.520841] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:24.647 [2024-10-09 01:35:23.525355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002abf0 00:15:24.647 01:35:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.647 01:35:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:24.647 [2024-10-09 01:35:23.527786] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:26.026 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.026 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.026 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.026 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.026 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.026 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.026 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.026 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.026 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.026 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.026 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.026 "name": "raid_bdev1", 00:15:26.026 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:26.026 "strip_size_kb": 64, 00:15:26.026 "state": "online", 00:15:26.026 "raid_level": "raid5f", 00:15:26.026 "superblock": true, 00:15:26.026 "num_base_bdevs": 4, 00:15:26.026 "num_base_bdevs_discovered": 4, 00:15:26.026 "num_base_bdevs_operational": 4, 00:15:26.026 "process": { 00:15:26.026 "type": "rebuild", 00:15:26.026 "target": "spare", 00:15:26.026 "progress": { 00:15:26.026 "blocks": 19200, 00:15:26.026 "percent": 10 00:15:26.026 } 00:15:26.026 }, 00:15:26.026 "base_bdevs_list": [ 00:15:26.026 { 00:15:26.026 "name": "spare", 00:15:26.026 "uuid": "95a3b644-3d81-54a7-8e62-d7f8b429410a", 00:15:26.026 "is_configured": true, 00:15:26.026 "data_offset": 2048, 00:15:26.026 "data_size": 63488 00:15:26.026 }, 00:15:26.026 { 00:15:26.026 "name": "BaseBdev2", 00:15:26.026 "uuid": "88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:26.026 "is_configured": true, 00:15:26.026 "data_offset": 2048, 00:15:26.026 "data_size": 63488 00:15:26.026 }, 00:15:26.026 { 00:15:26.026 "name": "BaseBdev3", 00:15:26.027 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:26.027 "is_configured": true, 00:15:26.027 "data_offset": 2048, 00:15:26.027 "data_size": 63488 00:15:26.027 }, 00:15:26.027 { 00:15:26.027 "name": "BaseBdev4", 00:15:26.027 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 
00:15:26.027 "is_configured": true, 00:15:26.027 "data_offset": 2048, 00:15:26.027 "data_size": 63488 00:15:26.027 } 00:15:26.027 ] 00:15:26.027 }' 00:15:26.027 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.027 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:26.027 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.027 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:26.027 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:26.027 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:26.027 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:26.027 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:26.027 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:26.027 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=540 00:15:26.027 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:26.027 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.027 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.027 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.027 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.027 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.027 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.027 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.027 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.027 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.027 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.027 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.027 "name": "raid_bdev1", 00:15:26.027 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:26.027 "strip_size_kb": 64, 00:15:26.027 "state": "online", 00:15:26.027 "raid_level": "raid5f", 00:15:26.027 "superblock": true, 00:15:26.027 "num_base_bdevs": 4, 00:15:26.027 "num_base_bdevs_discovered": 4, 00:15:26.027 "num_base_bdevs_operational": 4, 00:15:26.027 "process": { 00:15:26.027 "type": "rebuild", 00:15:26.027 "target": "spare", 00:15:26.027 "progress": { 00:15:26.027 "blocks": 21120, 00:15:26.027 "percent": 11 00:15:26.027 } 00:15:26.027 }, 00:15:26.027 "base_bdevs_list": [ 00:15:26.027 { 00:15:26.027 "name": "spare", 00:15:26.027 "uuid": "95a3b644-3d81-54a7-8e62-d7f8b429410a", 00:15:26.027 "is_configured": true, 00:15:26.027 "data_offset": 2048, 00:15:26.027 "data_size": 63488 00:15:26.027 }, 00:15:26.027 { 00:15:26.027 "name": "BaseBdev2", 00:15:26.027 "uuid": "88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:26.027 "is_configured": true, 00:15:26.027 "data_offset": 2048, 00:15:26.027 "data_size": 63488 00:15:26.027 }, 00:15:26.027 { 00:15:26.027 "name": "BaseBdev3", 00:15:26.027 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:26.027 "is_configured": true, 00:15:26.027 "data_offset": 2048, 00:15:26.027 "data_size": 63488 00:15:26.027 }, 00:15:26.027 { 00:15:26.027 "name": "BaseBdev4", 00:15:26.027 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 
00:15:26.027 "is_configured": true, 00:15:26.027 "data_offset": 2048, 00:15:26.027 "data_size": 63488 00:15:26.027 } 00:15:26.027 ] 00:15:26.027 }' 00:15:26.027 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.027 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:26.027 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.027 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:26.027 01:35:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:26.964 01:35:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:26.964 01:35:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.964 01:35:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.964 01:35:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.964 01:35:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.964 01:35:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.964 01:35:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.964 01:35:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.964 01:35:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.964 01:35:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.964 01:35:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.964 01:35:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.965 "name": "raid_bdev1", 00:15:26.965 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:26.965 "strip_size_kb": 64, 00:15:26.965 "state": "online", 00:15:26.965 "raid_level": "raid5f", 00:15:26.965 "superblock": true, 00:15:26.965 "num_base_bdevs": 4, 00:15:26.965 "num_base_bdevs_discovered": 4, 00:15:26.965 "num_base_bdevs_operational": 4, 00:15:26.965 "process": { 00:15:26.965 "type": "rebuild", 00:15:26.965 "target": "spare", 00:15:26.965 "progress": { 00:15:26.965 "blocks": 42240, 00:15:26.965 "percent": 22 00:15:26.965 } 00:15:26.965 }, 00:15:26.965 "base_bdevs_list": [ 00:15:26.965 { 00:15:26.965 "name": "spare", 00:15:26.965 "uuid": "95a3b644-3d81-54a7-8e62-d7f8b429410a", 00:15:26.965 "is_configured": true, 00:15:26.965 "data_offset": 2048, 00:15:26.965 "data_size": 63488 00:15:26.965 }, 00:15:26.965 { 00:15:26.965 "name": "BaseBdev2", 00:15:26.965 "uuid": "88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:26.965 "is_configured": true, 00:15:26.965 "data_offset": 2048, 00:15:26.965 "data_size": 63488 00:15:26.965 }, 00:15:26.965 { 00:15:26.965 "name": "BaseBdev3", 00:15:26.965 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:26.965 "is_configured": true, 00:15:26.965 "data_offset": 2048, 00:15:26.965 "data_size": 63488 00:15:26.965 }, 00:15:26.965 { 00:15:26.965 "name": "BaseBdev4", 00:15:26.965 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 00:15:26.965 "is_configured": true, 00:15:26.965 "data_offset": 2048, 00:15:26.965 "data_size": 63488 00:15:26.965 } 00:15:26.965 ] 00:15:26.965 }' 00:15:26.965 01:35:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.224 01:35:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.224 01:35:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.224 01:35:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.224 01:35:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:28.162 01:35:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:28.162 01:35:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:28.162 01:35:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.162 01:35:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:28.162 01:35:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:28.162 01:35:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.162 01:35:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.162 01:35:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.162 01:35:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.162 01:35:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.162 01:35:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.162 01:35:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.162 "name": "raid_bdev1", 00:15:28.162 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:28.162 "strip_size_kb": 64, 00:15:28.162 "state": "online", 00:15:28.162 "raid_level": "raid5f", 00:15:28.162 "superblock": true, 00:15:28.162 "num_base_bdevs": 4, 00:15:28.162 "num_base_bdevs_discovered": 4, 00:15:28.162 "num_base_bdevs_operational": 4, 00:15:28.162 "process": { 00:15:28.162 "type": "rebuild", 00:15:28.162 "target": "spare", 00:15:28.162 "progress": 
{ 00:15:28.162 "blocks": 65280, 00:15:28.162 "percent": 34 00:15:28.162 } 00:15:28.162 }, 00:15:28.162 "base_bdevs_list": [ 00:15:28.162 { 00:15:28.162 "name": "spare", 00:15:28.162 "uuid": "95a3b644-3d81-54a7-8e62-d7f8b429410a", 00:15:28.162 "is_configured": true, 00:15:28.162 "data_offset": 2048, 00:15:28.162 "data_size": 63488 00:15:28.162 }, 00:15:28.162 { 00:15:28.162 "name": "BaseBdev2", 00:15:28.162 "uuid": "88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:28.162 "is_configured": true, 00:15:28.162 "data_offset": 2048, 00:15:28.162 "data_size": 63488 00:15:28.162 }, 00:15:28.162 { 00:15:28.162 "name": "BaseBdev3", 00:15:28.162 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:28.162 "is_configured": true, 00:15:28.162 "data_offset": 2048, 00:15:28.162 "data_size": 63488 00:15:28.162 }, 00:15:28.162 { 00:15:28.162 "name": "BaseBdev4", 00:15:28.162 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 00:15:28.162 "is_configured": true, 00:15:28.162 "data_offset": 2048, 00:15:28.162 "data_size": 63488 00:15:28.162 } 00:15:28.162 ] 00:15:28.162 }' 00:15:28.162 01:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.162 01:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:28.162 01:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.422 01:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:28.422 01:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:29.361 01:35:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:29.361 01:35:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:29.361 01:35:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.361 
01:35:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:29.361 01:35:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:29.361 01:35:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.361 01:35:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.361 01:35:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.361 01:35:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.361 01:35:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.361 01:35:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.361 01:35:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.361 "name": "raid_bdev1", 00:15:29.361 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:29.361 "strip_size_kb": 64, 00:15:29.361 "state": "online", 00:15:29.361 "raid_level": "raid5f", 00:15:29.361 "superblock": true, 00:15:29.361 "num_base_bdevs": 4, 00:15:29.361 "num_base_bdevs_discovered": 4, 00:15:29.361 "num_base_bdevs_operational": 4, 00:15:29.361 "process": { 00:15:29.361 "type": "rebuild", 00:15:29.361 "target": "spare", 00:15:29.361 "progress": { 00:15:29.361 "blocks": 86400, 00:15:29.361 "percent": 45 00:15:29.361 } 00:15:29.361 }, 00:15:29.361 "base_bdevs_list": [ 00:15:29.361 { 00:15:29.361 "name": "spare", 00:15:29.361 "uuid": "95a3b644-3d81-54a7-8e62-d7f8b429410a", 00:15:29.361 "is_configured": true, 00:15:29.361 "data_offset": 2048, 00:15:29.361 "data_size": 63488 00:15:29.361 }, 00:15:29.361 { 00:15:29.361 "name": "BaseBdev2", 00:15:29.361 "uuid": "88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:29.361 "is_configured": true, 00:15:29.361 "data_offset": 2048, 00:15:29.361 "data_size": 
63488 00:15:29.361 }, 00:15:29.361 { 00:15:29.361 "name": "BaseBdev3", 00:15:29.361 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:29.361 "is_configured": true, 00:15:29.361 "data_offset": 2048, 00:15:29.361 "data_size": 63488 00:15:29.361 }, 00:15:29.361 { 00:15:29.361 "name": "BaseBdev4", 00:15:29.361 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 00:15:29.361 "is_configured": true, 00:15:29.361 "data_offset": 2048, 00:15:29.361 "data_size": 63488 00:15:29.361 } 00:15:29.361 ] 00:15:29.361 }' 00:15:29.361 01:35:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.361 01:35:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:29.361 01:35:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.361 01:35:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:29.361 01:35:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:30.301 01:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:30.301 01:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.301 01:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.301 01:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.301 01:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.301 01:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.561 01:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.561 01:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:30.561 01:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.561 01:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.561 01:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.561 01:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.561 "name": "raid_bdev1", 00:15:30.561 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:30.561 "strip_size_kb": 64, 00:15:30.561 "state": "online", 00:15:30.561 "raid_level": "raid5f", 00:15:30.561 "superblock": true, 00:15:30.561 "num_base_bdevs": 4, 00:15:30.561 "num_base_bdevs_discovered": 4, 00:15:30.561 "num_base_bdevs_operational": 4, 00:15:30.561 "process": { 00:15:30.561 "type": "rebuild", 00:15:30.561 "target": "spare", 00:15:30.561 "progress": { 00:15:30.561 "blocks": 107520, 00:15:30.561 "percent": 56 00:15:30.561 } 00:15:30.561 }, 00:15:30.561 "base_bdevs_list": [ 00:15:30.561 { 00:15:30.561 "name": "spare", 00:15:30.561 "uuid": "95a3b644-3d81-54a7-8e62-d7f8b429410a", 00:15:30.561 "is_configured": true, 00:15:30.561 "data_offset": 2048, 00:15:30.561 "data_size": 63488 00:15:30.561 }, 00:15:30.561 { 00:15:30.561 "name": "BaseBdev2", 00:15:30.561 "uuid": "88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:30.561 "is_configured": true, 00:15:30.561 "data_offset": 2048, 00:15:30.561 "data_size": 63488 00:15:30.561 }, 00:15:30.561 { 00:15:30.561 "name": "BaseBdev3", 00:15:30.561 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:30.561 "is_configured": true, 00:15:30.561 "data_offset": 2048, 00:15:30.561 "data_size": 63488 00:15:30.561 }, 00:15:30.561 { 00:15:30.561 "name": "BaseBdev4", 00:15:30.561 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 00:15:30.561 "is_configured": true, 00:15:30.561 "data_offset": 2048, 00:15:30.561 "data_size": 63488 00:15:30.561 } 00:15:30.561 ] 00:15:30.561 }' 00:15:30.561 01:35:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.561 01:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.561 01:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.561 01:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.561 01:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:31.500 01:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:31.500 01:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.500 01:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.500 01:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.500 01:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.500 01:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.500 01:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.500 01:35:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.500 01:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.500 01:35:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.500 01:35:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.760 01:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.760 "name": "raid_bdev1", 00:15:31.760 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:31.760 
"strip_size_kb": 64, 00:15:31.760 "state": "online", 00:15:31.760 "raid_level": "raid5f", 00:15:31.760 "superblock": true, 00:15:31.760 "num_base_bdevs": 4, 00:15:31.760 "num_base_bdevs_discovered": 4, 00:15:31.760 "num_base_bdevs_operational": 4, 00:15:31.760 "process": { 00:15:31.760 "type": "rebuild", 00:15:31.760 "target": "spare", 00:15:31.760 "progress": { 00:15:31.760 "blocks": 128640, 00:15:31.760 "percent": 67 00:15:31.760 } 00:15:31.760 }, 00:15:31.760 "base_bdevs_list": [ 00:15:31.760 { 00:15:31.760 "name": "spare", 00:15:31.760 "uuid": "95a3b644-3d81-54a7-8e62-d7f8b429410a", 00:15:31.760 "is_configured": true, 00:15:31.760 "data_offset": 2048, 00:15:31.760 "data_size": 63488 00:15:31.760 }, 00:15:31.760 { 00:15:31.760 "name": "BaseBdev2", 00:15:31.760 "uuid": "88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:31.760 "is_configured": true, 00:15:31.760 "data_offset": 2048, 00:15:31.760 "data_size": 63488 00:15:31.760 }, 00:15:31.760 { 00:15:31.760 "name": "BaseBdev3", 00:15:31.760 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:31.760 "is_configured": true, 00:15:31.760 "data_offset": 2048, 00:15:31.760 "data_size": 63488 00:15:31.760 }, 00:15:31.760 { 00:15:31.760 "name": "BaseBdev4", 00:15:31.760 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 00:15:31.760 "is_configured": true, 00:15:31.760 "data_offset": 2048, 00:15:31.760 "data_size": 63488 00:15:31.760 } 00:15:31.760 ] 00:15:31.760 }' 00:15:31.760 01:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.760 01:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:31.760 01:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.760 01:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:31.760 01:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:32.699 
01:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:32.699 01:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.699 01:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.699 01:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.699 01:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.699 01:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.699 01:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.699 01:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.699 01:35:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.699 01:35:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.699 01:35:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.699 01:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.699 "name": "raid_bdev1", 00:15:32.699 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:32.699 "strip_size_kb": 64, 00:15:32.699 "state": "online", 00:15:32.699 "raid_level": "raid5f", 00:15:32.699 "superblock": true, 00:15:32.699 "num_base_bdevs": 4, 00:15:32.699 "num_base_bdevs_discovered": 4, 00:15:32.699 "num_base_bdevs_operational": 4, 00:15:32.699 "process": { 00:15:32.699 "type": "rebuild", 00:15:32.699 "target": "spare", 00:15:32.699 "progress": { 00:15:32.699 "blocks": 151680, 00:15:32.699 "percent": 79 00:15:32.699 } 00:15:32.699 }, 00:15:32.699 "base_bdevs_list": [ 00:15:32.699 { 00:15:32.699 "name": "spare", 00:15:32.699 "uuid": 
"95a3b644-3d81-54a7-8e62-d7f8b429410a", 00:15:32.699 "is_configured": true, 00:15:32.699 "data_offset": 2048, 00:15:32.699 "data_size": 63488 00:15:32.699 }, 00:15:32.699 { 00:15:32.699 "name": "BaseBdev2", 00:15:32.699 "uuid": "88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:32.699 "is_configured": true, 00:15:32.699 "data_offset": 2048, 00:15:32.699 "data_size": 63488 00:15:32.699 }, 00:15:32.699 { 00:15:32.699 "name": "BaseBdev3", 00:15:32.699 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:32.699 "is_configured": true, 00:15:32.699 "data_offset": 2048, 00:15:32.699 "data_size": 63488 00:15:32.699 }, 00:15:32.699 { 00:15:32.699 "name": "BaseBdev4", 00:15:32.699 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 00:15:32.699 "is_configured": true, 00:15:32.699 "data_offset": 2048, 00:15:32.699 "data_size": 63488 00:15:32.699 } 00:15:32.699 ] 00:15:32.699 }' 00:15:32.699 01:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.959 01:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.959 01:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.959 01:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.959 01:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:33.898 01:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:33.898 01:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:33.898 01:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.898 01:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:33.898 01:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:15:33.898 01:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.898 01:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.898 01:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.898 01:35:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.898 01:35:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.898 01:35:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.898 01:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.898 "name": "raid_bdev1", 00:15:33.898 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:33.898 "strip_size_kb": 64, 00:15:33.898 "state": "online", 00:15:33.898 "raid_level": "raid5f", 00:15:33.898 "superblock": true, 00:15:33.898 "num_base_bdevs": 4, 00:15:33.898 "num_base_bdevs_discovered": 4, 00:15:33.898 "num_base_bdevs_operational": 4, 00:15:33.898 "process": { 00:15:33.898 "type": "rebuild", 00:15:33.898 "target": "spare", 00:15:33.898 "progress": { 00:15:33.898 "blocks": 172800, 00:15:33.898 "percent": 90 00:15:33.898 } 00:15:33.898 }, 00:15:33.898 "base_bdevs_list": [ 00:15:33.898 { 00:15:33.898 "name": "spare", 00:15:33.898 "uuid": "95a3b644-3d81-54a7-8e62-d7f8b429410a", 00:15:33.898 "is_configured": true, 00:15:33.898 "data_offset": 2048, 00:15:33.898 "data_size": 63488 00:15:33.898 }, 00:15:33.898 { 00:15:33.898 "name": "BaseBdev2", 00:15:33.898 "uuid": "88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:33.898 "is_configured": true, 00:15:33.898 "data_offset": 2048, 00:15:33.898 "data_size": 63488 00:15:33.898 }, 00:15:33.898 { 00:15:33.898 "name": "BaseBdev3", 00:15:33.898 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:33.898 "is_configured": true, 00:15:33.898 
"data_offset": 2048, 00:15:33.898 "data_size": 63488 00:15:33.898 }, 00:15:33.898 { 00:15:33.898 "name": "BaseBdev4", 00:15:33.898 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 00:15:33.898 "is_configured": true, 00:15:33.898 "data_offset": 2048, 00:15:33.898 "data_size": 63488 00:15:33.898 } 00:15:33.898 ] 00:15:33.898 }' 00:15:33.898 01:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.898 01:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:33.898 01:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.157 01:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.157 01:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:34.727 [2024-10-09 01:35:33.591749] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:34.727 [2024-10-09 01:35:33.591879] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:34.727 [2024-10-09 01:35:33.592059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.987 01:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:34.987 01:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.987 01:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.987 01:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.987 01:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:34.987 01:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.987 01:35:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.987 01:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.987 01:35:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.987 01:35:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.987 01:35:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.987 01:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.987 "name": "raid_bdev1", 00:15:34.987 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:34.987 "strip_size_kb": 64, 00:15:34.987 "state": "online", 00:15:34.987 "raid_level": "raid5f", 00:15:34.987 "superblock": true, 00:15:34.987 "num_base_bdevs": 4, 00:15:34.987 "num_base_bdevs_discovered": 4, 00:15:34.987 "num_base_bdevs_operational": 4, 00:15:34.987 "base_bdevs_list": [ 00:15:34.987 { 00:15:34.987 "name": "spare", 00:15:34.987 "uuid": "95a3b644-3d81-54a7-8e62-d7f8b429410a", 00:15:34.987 "is_configured": true, 00:15:34.987 "data_offset": 2048, 00:15:34.987 "data_size": 63488 00:15:34.987 }, 00:15:34.987 { 00:15:34.987 "name": "BaseBdev2", 00:15:34.987 "uuid": "88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:34.987 "is_configured": true, 00:15:34.987 "data_offset": 2048, 00:15:34.987 "data_size": 63488 00:15:34.987 }, 00:15:34.987 { 00:15:34.987 "name": "BaseBdev3", 00:15:34.987 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:34.987 "is_configured": true, 00:15:34.987 "data_offset": 2048, 00:15:34.987 "data_size": 63488 00:15:34.987 }, 00:15:34.987 { 00:15:34.987 "name": "BaseBdev4", 00:15:34.987 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 00:15:34.987 "is_configured": true, 00:15:34.987 "data_offset": 2048, 00:15:34.987 "data_size": 63488 00:15:34.987 } 00:15:34.987 ] 00:15:34.987 }' 00:15:34.987 01:35:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.249 01:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:35.249 01:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.249 01:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:35.249 01:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:35.249 01:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:35.249 01:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.249 01:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:35.249 01:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:35.249 01:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.249 01:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.249 01:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.249 01:35:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.249 01:35:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.249 01:35:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.249 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.249 "name": "raid_bdev1", 00:15:35.249 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:35.249 "strip_size_kb": 64, 00:15:35.249 "state": "online", 00:15:35.249 "raid_level": "raid5f", 00:15:35.249 "superblock": true, 
00:15:35.249 "num_base_bdevs": 4, 00:15:35.249 "num_base_bdevs_discovered": 4, 00:15:35.249 "num_base_bdevs_operational": 4, 00:15:35.249 "base_bdevs_list": [ 00:15:35.249 { 00:15:35.249 "name": "spare", 00:15:35.249 "uuid": "95a3b644-3d81-54a7-8e62-d7f8b429410a", 00:15:35.249 "is_configured": true, 00:15:35.249 "data_offset": 2048, 00:15:35.249 "data_size": 63488 00:15:35.249 }, 00:15:35.249 { 00:15:35.249 "name": "BaseBdev2", 00:15:35.249 "uuid": "88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:35.249 "is_configured": true, 00:15:35.249 "data_offset": 2048, 00:15:35.249 "data_size": 63488 00:15:35.249 }, 00:15:35.249 { 00:15:35.249 "name": "BaseBdev3", 00:15:35.249 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:35.249 "is_configured": true, 00:15:35.249 "data_offset": 2048, 00:15:35.249 "data_size": 63488 00:15:35.249 }, 00:15:35.249 { 00:15:35.249 "name": "BaseBdev4", 00:15:35.249 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 00:15:35.249 "is_configured": true, 00:15:35.249 "data_offset": 2048, 00:15:35.249 "data_size": 63488 00:15:35.249 } 00:15:35.249 ] 00:15:35.249 }' 00:15:35.249 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.249 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:35.249 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.249 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:35.249 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:35.249 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.249 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.249 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- 
# local raid_level=raid5f 00:15:35.249 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.249 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:35.249 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.249 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.249 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.250 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.250 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.250 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.250 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.250 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.519 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.519 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.519 "name": "raid_bdev1", 00:15:35.519 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:35.519 "strip_size_kb": 64, 00:15:35.519 "state": "online", 00:15:35.519 "raid_level": "raid5f", 00:15:35.519 "superblock": true, 00:15:35.519 "num_base_bdevs": 4, 00:15:35.519 "num_base_bdevs_discovered": 4, 00:15:35.519 "num_base_bdevs_operational": 4, 00:15:35.519 "base_bdevs_list": [ 00:15:35.519 { 00:15:35.519 "name": "spare", 00:15:35.519 "uuid": "95a3b644-3d81-54a7-8e62-d7f8b429410a", 00:15:35.519 "is_configured": true, 00:15:35.519 "data_offset": 2048, 00:15:35.519 "data_size": 63488 00:15:35.519 }, 00:15:35.519 { 00:15:35.519 "name": 
"BaseBdev2", 00:15:35.519 "uuid": "88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:35.519 "is_configured": true, 00:15:35.519 "data_offset": 2048, 00:15:35.519 "data_size": 63488 00:15:35.519 }, 00:15:35.519 { 00:15:35.519 "name": "BaseBdev3", 00:15:35.519 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:35.519 "is_configured": true, 00:15:35.519 "data_offset": 2048, 00:15:35.519 "data_size": 63488 00:15:35.519 }, 00:15:35.519 { 00:15:35.519 "name": "BaseBdev4", 00:15:35.519 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 00:15:35.519 "is_configured": true, 00:15:35.519 "data_offset": 2048, 00:15:35.519 "data_size": 63488 00:15:35.519 } 00:15:35.519 ] 00:15:35.519 }' 00:15:35.519 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.519 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.789 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:35.789 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.789 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.789 [2024-10-09 01:35:34.564903] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:35.789 [2024-10-09 01:35:34.564939] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:35.789 [2024-10-09 01:35:34.565019] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.789 [2024-10-09 01:35:34.565109] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:35.789 [2024-10-09 01:35:34.565122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:35.789 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.789 
01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:35.789 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.789 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.789 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.789 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.789 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:35.789 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:35.789 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:35.789 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:35.789 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:35.789 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:35.789 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:35.789 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:35.789 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:35.789 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:35.789 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:35.789 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:35.789 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 
BaseBdev1 /dev/nbd0 00:15:36.049 /dev/nbd0 00:15:36.049 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:36.049 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:36.049 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:36.049 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:36.049 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:36.049 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:36.049 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:36.049 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:36.049 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:36.049 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:36.049 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.049 1+0 records in 00:15:36.049 1+0 records out 00:15:36.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000534266 s, 7.7 MB/s 00:15:36.049 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.049 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:36.049 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.049 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:36.049 01:35:34 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@889 -- # return 0 00:15:36.049 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.049 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:36.049 01:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:36.308 /dev/nbd1 00:15:36.308 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:36.308 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:36.308 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:36.308 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:36.308 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:36.308 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:36.308 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:36.308 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:36.308 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:36.308 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:36.309 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.309 1+0 records in 00:15:36.309 1+0 records out 00:15:36.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479177 s, 8.5 MB/s 00:15:36.309 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.309 
01:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:36.309 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.309 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:36.309 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:36.309 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.309 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:36.309 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:36.309 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:36.309 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:36.309 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:36.309 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:36.309 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:36.309 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:36.309 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:36.568 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:36.568 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:36.568 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:36.568 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:36.568 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:36.568 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:36.568 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:36.568 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:36.568 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:36.568 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:36.828 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:36.828 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:36.828 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:36.828 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:36.828 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:36.828 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:36.828 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:36.828 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:36.828 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:36.828 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:36.828 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.828 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.828 01:35:35 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.828 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:36.828 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.828 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.828 [2024-10-09 01:35:35.605460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:36.828 [2024-10-09 01:35:35.605578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.828 [2024-10-09 01:35:35.605607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:36.828 [2024-10-09 01:35:35.605617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.828 [2024-10-09 01:35:35.608081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.828 [2024-10-09 01:35:35.608117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:36.828 [2024-10-09 01:35:35.608193] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:36.828 [2024-10-09 01:35:35.608244] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:36.828 [2024-10-09 01:35:35.608356] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:36.828 [2024-10-09 01:35:35.608436] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:36.828 [2024-10-09 01:35:35.608503] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:36.828 spare 00:15:36.828 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.828 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # 
rpc_cmd bdev_wait_for_examine 00:15:36.828 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.828 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.828 [2024-10-09 01:35:35.708584] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:36.828 [2024-10-09 01:35:35.708619] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:36.828 [2024-10-09 01:35:35.708893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:15:36.828 [2024-10-09 01:35:35.709373] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:36.828 [2024-10-09 01:35:35.709384] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:36.829 [2024-10-09 01:35:35.709562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.829 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.829 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:36.829 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.829 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.829 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.829 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.829 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:36.829 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.829 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.829 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.829 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.829 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.829 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.829 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.088 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.088 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.088 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.088 "name": "raid_bdev1", 00:15:37.088 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:37.088 "strip_size_kb": 64, 00:15:37.088 "state": "online", 00:15:37.088 "raid_level": "raid5f", 00:15:37.088 "superblock": true, 00:15:37.088 "num_base_bdevs": 4, 00:15:37.088 "num_base_bdevs_discovered": 4, 00:15:37.088 "num_base_bdevs_operational": 4, 00:15:37.088 "base_bdevs_list": [ 00:15:37.088 { 00:15:37.088 "name": "spare", 00:15:37.088 "uuid": "95a3b644-3d81-54a7-8e62-d7f8b429410a", 00:15:37.088 "is_configured": true, 00:15:37.088 "data_offset": 2048, 00:15:37.088 "data_size": 63488 00:15:37.088 }, 00:15:37.088 { 00:15:37.088 "name": "BaseBdev2", 00:15:37.088 "uuid": "88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:37.088 "is_configured": true, 00:15:37.088 "data_offset": 2048, 00:15:37.088 "data_size": 63488 00:15:37.088 }, 00:15:37.088 { 00:15:37.088 "name": "BaseBdev3", 00:15:37.088 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:37.088 "is_configured": true, 00:15:37.088 "data_offset": 2048, 00:15:37.088 "data_size": 63488 00:15:37.088 }, 
00:15:37.088 { 00:15:37.088 "name": "BaseBdev4", 00:15:37.088 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 00:15:37.088 "is_configured": true, 00:15:37.088 "data_offset": 2048, 00:15:37.088 "data_size": 63488 00:15:37.088 } 00:15:37.088 ] 00:15:37.088 }' 00:15:37.088 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.088 01:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.348 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:37.348 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.348 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:37.348 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:37.348 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.348 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.348 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.348 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.348 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.348 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.348 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.348 "name": "raid_bdev1", 00:15:37.348 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:37.348 "strip_size_kb": 64, 00:15:37.348 "state": "online", 00:15:37.348 "raid_level": "raid5f", 00:15:37.348 "superblock": true, 00:15:37.348 "num_base_bdevs": 4, 00:15:37.348 "num_base_bdevs_discovered": 4, 
00:15:37.348 "num_base_bdevs_operational": 4, 00:15:37.348 "base_bdevs_list": [ 00:15:37.348 { 00:15:37.348 "name": "spare", 00:15:37.348 "uuid": "95a3b644-3d81-54a7-8e62-d7f8b429410a", 00:15:37.348 "is_configured": true, 00:15:37.348 "data_offset": 2048, 00:15:37.348 "data_size": 63488 00:15:37.348 }, 00:15:37.348 { 00:15:37.348 "name": "BaseBdev2", 00:15:37.348 "uuid": "88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:37.348 "is_configured": true, 00:15:37.348 "data_offset": 2048, 00:15:37.348 "data_size": 63488 00:15:37.348 }, 00:15:37.348 { 00:15:37.348 "name": "BaseBdev3", 00:15:37.348 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:37.348 "is_configured": true, 00:15:37.348 "data_offset": 2048, 00:15:37.348 "data_size": 63488 00:15:37.348 }, 00:15:37.348 { 00:15:37.348 "name": "BaseBdev4", 00:15:37.348 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 00:15:37.348 "is_configured": true, 00:15:37.348 "data_offset": 2048, 00:15:37.348 "data_size": 63488 00:15:37.348 } 00:15:37.348 ] 00:15:37.348 }' 00:15:37.348 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.348 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:37.348 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.608 [2024-10-09 01:35:36.289747] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.608 "name": "raid_bdev1", 00:15:37.608 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:37.608 "strip_size_kb": 64, 00:15:37.608 "state": "online", 00:15:37.608 "raid_level": "raid5f", 00:15:37.608 "superblock": true, 00:15:37.608 "num_base_bdevs": 4, 00:15:37.608 "num_base_bdevs_discovered": 3, 00:15:37.608 "num_base_bdevs_operational": 3, 00:15:37.608 "base_bdevs_list": [ 00:15:37.608 { 00:15:37.608 "name": null, 00:15:37.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.608 "is_configured": false, 00:15:37.608 "data_offset": 0, 00:15:37.608 "data_size": 63488 00:15:37.608 }, 00:15:37.608 { 00:15:37.608 "name": "BaseBdev2", 00:15:37.608 "uuid": "88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:37.608 "is_configured": true, 00:15:37.608 "data_offset": 2048, 00:15:37.608 "data_size": 63488 00:15:37.608 }, 00:15:37.608 { 00:15:37.608 "name": "BaseBdev3", 00:15:37.608 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:37.608 "is_configured": true, 00:15:37.608 "data_offset": 2048, 00:15:37.608 "data_size": 63488 00:15:37.608 }, 00:15:37.608 { 00:15:37.608 "name": "BaseBdev4", 00:15:37.608 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 00:15:37.608 "is_configured": true, 00:15:37.608 "data_offset": 2048, 00:15:37.608 "data_size": 63488 00:15:37.608 } 00:15:37.608 ] 00:15:37.608 }' 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.608 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:37.868 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:37.868 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.868 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.868 [2024-10-09 01:35:36.713958] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:37.868 [2024-10-09 01:35:36.714103] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:37.868 [2024-10-09 01:35:36.714130] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:37.868 [2024-10-09 01:35:36.714164] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:37.868 [2024-10-09 01:35:36.719717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:15:37.868 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.868 01:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:37.868 [2024-10-09 01:35:36.722167] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:39.248 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.249 
01:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.249 "name": "raid_bdev1", 00:15:39.249 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:39.249 "strip_size_kb": 64, 00:15:39.249 "state": "online", 00:15:39.249 "raid_level": "raid5f", 00:15:39.249 "superblock": true, 00:15:39.249 "num_base_bdevs": 4, 00:15:39.249 "num_base_bdevs_discovered": 4, 00:15:39.249 "num_base_bdevs_operational": 4, 00:15:39.249 "process": { 00:15:39.249 "type": "rebuild", 00:15:39.249 "target": "spare", 00:15:39.249 "progress": { 00:15:39.249 "blocks": 19200, 00:15:39.249 "percent": 10 00:15:39.249 } 00:15:39.249 }, 00:15:39.249 "base_bdevs_list": [ 00:15:39.249 { 00:15:39.249 "name": "spare", 00:15:39.249 "uuid": "95a3b644-3d81-54a7-8e62-d7f8b429410a", 00:15:39.249 "is_configured": true, 00:15:39.249 "data_offset": 2048, 00:15:39.249 "data_size": 63488 00:15:39.249 }, 00:15:39.249 { 00:15:39.249 "name": "BaseBdev2", 00:15:39.249 "uuid": "88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:39.249 "is_configured": true, 00:15:39.249 "data_offset": 2048, 00:15:39.249 "data_size": 63488 00:15:39.249 }, 00:15:39.249 { 00:15:39.249 "name": "BaseBdev3", 00:15:39.249 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:39.249 "is_configured": true, 00:15:39.249 "data_offset": 2048, 00:15:39.249 "data_size": 63488 00:15:39.249 }, 00:15:39.249 { 00:15:39.249 "name": "BaseBdev4", 00:15:39.249 "uuid": 
"c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 00:15:39.249 "is_configured": true, 00:15:39.249 "data_offset": 2048, 00:15:39.249 "data_size": 63488 00:15:39.249 } 00:15:39.249 ] 00:15:39.249 }' 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.249 [2024-10-09 01:35:37.884012] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:39.249 [2024-10-09 01:35:37.930428] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:39.249 [2024-10-09 01:35:37.930548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.249 [2024-10-09 01:35:37.930587] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:39.249 [2024-10-09 01:35:37.930612] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.249 01:35:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.249 "name": "raid_bdev1", 00:15:39.249 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:39.249 "strip_size_kb": 64, 00:15:39.249 "state": "online", 00:15:39.249 "raid_level": "raid5f", 00:15:39.249 "superblock": true, 00:15:39.249 "num_base_bdevs": 4, 00:15:39.249 "num_base_bdevs_discovered": 3, 00:15:39.249 "num_base_bdevs_operational": 3, 00:15:39.249 "base_bdevs_list": [ 00:15:39.249 { 00:15:39.249 "name": null, 00:15:39.249 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:39.249 "is_configured": false, 00:15:39.249 "data_offset": 0, 00:15:39.249 "data_size": 63488 00:15:39.249 }, 00:15:39.249 { 00:15:39.249 "name": "BaseBdev2", 00:15:39.249 "uuid": "88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:39.249 "is_configured": true, 00:15:39.249 "data_offset": 2048, 00:15:39.249 "data_size": 63488 00:15:39.249 }, 00:15:39.249 { 00:15:39.249 "name": "BaseBdev3", 00:15:39.249 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:39.249 "is_configured": true, 00:15:39.249 "data_offset": 2048, 00:15:39.249 "data_size": 63488 00:15:39.249 }, 00:15:39.249 { 00:15:39.249 "name": "BaseBdev4", 00:15:39.249 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 00:15:39.249 "is_configured": true, 00:15:39.249 "data_offset": 2048, 00:15:39.249 "data_size": 63488 00:15:39.249 } 00:15:39.249 ] 00:15:39.249 }' 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.249 01:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.509 01:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:39.509 01:35:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.509 01:35:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.509 [2024-10-09 01:35:38.374257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:39.509 [2024-10-09 01:35:38.374361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.509 [2024-10-09 01:35:38.374404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:39.509 [2024-10-09 01:35:38.374434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.509 [2024-10-09 01:35:38.374937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:15:39.509 [2024-10-09 01:35:38.374999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:39.509 [2024-10-09 01:35:38.375115] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:39.509 [2024-10-09 01:35:38.375161] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:39.509 [2024-10-09 01:35:38.375198] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:39.509 [2024-10-09 01:35:38.375285] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:39.509 [2024-10-09 01:35:38.379184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049440 00:15:39.509 spare 00:15:39.509 01:35:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.509 01:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:39.509 [2024-10-09 01:35:38.381658] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.890 "name": "raid_bdev1", 00:15:40.890 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:40.890 "strip_size_kb": 64, 00:15:40.890 "state": "online", 00:15:40.890 "raid_level": "raid5f", 00:15:40.890 "superblock": true, 00:15:40.890 "num_base_bdevs": 4, 00:15:40.890 "num_base_bdevs_discovered": 4, 00:15:40.890 "num_base_bdevs_operational": 4, 00:15:40.890 "process": { 00:15:40.890 "type": "rebuild", 00:15:40.890 "target": "spare", 00:15:40.890 "progress": { 00:15:40.890 "blocks": 19200, 00:15:40.890 "percent": 10 00:15:40.890 } 00:15:40.890 }, 00:15:40.890 "base_bdevs_list": [ 00:15:40.890 { 00:15:40.890 "name": "spare", 00:15:40.890 "uuid": "95a3b644-3d81-54a7-8e62-d7f8b429410a", 00:15:40.890 "is_configured": true, 00:15:40.890 "data_offset": 2048, 00:15:40.890 "data_size": 63488 00:15:40.890 }, 00:15:40.890 { 00:15:40.890 "name": "BaseBdev2", 00:15:40.890 "uuid": "88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:40.890 "is_configured": true, 00:15:40.890 "data_offset": 2048, 00:15:40.890 "data_size": 63488 00:15:40.890 }, 00:15:40.890 { 00:15:40.890 "name": "BaseBdev3", 00:15:40.890 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:40.890 "is_configured": true, 00:15:40.890 "data_offset": 2048, 00:15:40.890 "data_size": 63488 00:15:40.890 }, 00:15:40.890 { 00:15:40.890 "name": "BaseBdev4", 00:15:40.890 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 00:15:40.890 "is_configured": true, 00:15:40.890 "data_offset": 2048, 00:15:40.890 "data_size": 63488 00:15:40.890 } 00:15:40.890 ] 00:15:40.890 }' 00:15:40.890 01:35:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.890 [2024-10-09 01:35:39.515598] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:40.890 [2024-10-09 01:35:39.590106] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:40.890 [2024-10-09 01:35:39.590204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.890 [2024-10-09 01:35:39.590241] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:40.890 [2024-10-09 01:35:39.590261] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.890 
01:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.890 "name": "raid_bdev1", 00:15:40.890 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:40.890 "strip_size_kb": 64, 00:15:40.890 "state": "online", 00:15:40.890 "raid_level": "raid5f", 00:15:40.890 "superblock": true, 00:15:40.890 "num_base_bdevs": 4, 00:15:40.890 "num_base_bdevs_discovered": 3, 00:15:40.890 "num_base_bdevs_operational": 3, 00:15:40.890 "base_bdevs_list": [ 00:15:40.890 { 00:15:40.890 "name": null, 00:15:40.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.890 "is_configured": false, 00:15:40.890 "data_offset": 0, 00:15:40.890 "data_size": 63488 00:15:40.890 }, 00:15:40.890 { 00:15:40.890 "name": "BaseBdev2", 00:15:40.890 "uuid": 
"88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:40.890 "is_configured": true, 00:15:40.890 "data_offset": 2048, 00:15:40.890 "data_size": 63488 00:15:40.890 }, 00:15:40.890 { 00:15:40.890 "name": "BaseBdev3", 00:15:40.890 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:40.890 "is_configured": true, 00:15:40.890 "data_offset": 2048, 00:15:40.890 "data_size": 63488 00:15:40.890 }, 00:15:40.890 { 00:15:40.890 "name": "BaseBdev4", 00:15:40.890 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 00:15:40.890 "is_configured": true, 00:15:40.890 "data_offset": 2048, 00:15:40.890 "data_size": 63488 00:15:40.890 } 00:15:40.890 ] 00:15:40.890 }' 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.890 01:35:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.460 01:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:41.460 01:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.460 01:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:41.460 01:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:41.460 01:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.460 01:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.460 01:35:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.460 01:35:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.460 01:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.460 01:35:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.460 01:35:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.460 "name": "raid_bdev1", 00:15:41.460 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:41.460 "strip_size_kb": 64, 00:15:41.460 "state": "online", 00:15:41.460 "raid_level": "raid5f", 00:15:41.460 "superblock": true, 00:15:41.460 "num_base_bdevs": 4, 00:15:41.460 "num_base_bdevs_discovered": 3, 00:15:41.460 "num_base_bdevs_operational": 3, 00:15:41.460 "base_bdevs_list": [ 00:15:41.460 { 00:15:41.460 "name": null, 00:15:41.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.460 "is_configured": false, 00:15:41.460 "data_offset": 0, 00:15:41.460 "data_size": 63488 00:15:41.460 }, 00:15:41.460 { 00:15:41.460 "name": "BaseBdev2", 00:15:41.460 "uuid": "88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:41.460 "is_configured": true, 00:15:41.460 "data_offset": 2048, 00:15:41.460 "data_size": 63488 00:15:41.460 }, 00:15:41.460 { 00:15:41.460 "name": "BaseBdev3", 00:15:41.460 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:41.460 "is_configured": true, 00:15:41.460 "data_offset": 2048, 00:15:41.460 "data_size": 63488 00:15:41.460 }, 00:15:41.460 { 00:15:41.460 "name": "BaseBdev4", 00:15:41.460 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 00:15:41.460 "is_configured": true, 00:15:41.460 "data_offset": 2048, 00:15:41.460 "data_size": 63488 00:15:41.460 } 00:15:41.460 ] 00:15:41.460 }' 00:15:41.460 01:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.460 01:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:41.460 01:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.460 01:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:41.460 01:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:41.460 
01:35:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.460 01:35:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.460 01:35:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.460 01:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:41.460 01:35:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.460 01:35:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.460 [2024-10-09 01:35:40.221881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:41.460 [2024-10-09 01:35:40.221930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.460 [2024-10-09 01:35:40.221953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:41.460 [2024-10-09 01:35:40.221962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.460 [2024-10-09 01:35:40.222391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.460 [2024-10-09 01:35:40.222407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:41.460 [2024-10-09 01:35:40.222480] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:41.460 [2024-10-09 01:35:40.222493] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:41.460 [2024-10-09 01:35:40.222503] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:41.460 [2024-10-09 01:35:40.222513] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 
00:15:41.460 BaseBdev1 00:15:41.460 01:35:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.460 01:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:42.400 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:42.400 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.400 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.400 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.400 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.400 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.400 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.400 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.400 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.400 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.400 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.400 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.400 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.400 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.400 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.400 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:42.400 "name": "raid_bdev1", 00:15:42.400 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:42.400 "strip_size_kb": 64, 00:15:42.400 "state": "online", 00:15:42.400 "raid_level": "raid5f", 00:15:42.400 "superblock": true, 00:15:42.400 "num_base_bdevs": 4, 00:15:42.400 "num_base_bdevs_discovered": 3, 00:15:42.400 "num_base_bdevs_operational": 3, 00:15:42.400 "base_bdevs_list": [ 00:15:42.400 { 00:15:42.400 "name": null, 00:15:42.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.400 "is_configured": false, 00:15:42.400 "data_offset": 0, 00:15:42.400 "data_size": 63488 00:15:42.400 }, 00:15:42.400 { 00:15:42.400 "name": "BaseBdev2", 00:15:42.400 "uuid": "88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:42.400 "is_configured": true, 00:15:42.400 "data_offset": 2048, 00:15:42.400 "data_size": 63488 00:15:42.400 }, 00:15:42.400 { 00:15:42.400 "name": "BaseBdev3", 00:15:42.400 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:42.400 "is_configured": true, 00:15:42.400 "data_offset": 2048, 00:15:42.400 "data_size": 63488 00:15:42.400 }, 00:15:42.400 { 00:15:42.400 "name": "BaseBdev4", 00:15:42.400 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 00:15:42.400 "is_configured": true, 00:15:42.400 "data_offset": 2048, 00:15:42.400 "data_size": 63488 00:15:42.400 } 00:15:42.400 ] 00:15:42.400 }' 00:15:42.400 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.400 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=none 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.970 "name": "raid_bdev1", 00:15:42.970 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:42.970 "strip_size_kb": 64, 00:15:42.970 "state": "online", 00:15:42.970 "raid_level": "raid5f", 00:15:42.970 "superblock": true, 00:15:42.970 "num_base_bdevs": 4, 00:15:42.970 "num_base_bdevs_discovered": 3, 00:15:42.970 "num_base_bdevs_operational": 3, 00:15:42.970 "base_bdevs_list": [ 00:15:42.970 { 00:15:42.970 "name": null, 00:15:42.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.970 "is_configured": false, 00:15:42.970 "data_offset": 0, 00:15:42.970 "data_size": 63488 00:15:42.970 }, 00:15:42.970 { 00:15:42.970 "name": "BaseBdev2", 00:15:42.970 "uuid": "88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:42.970 "is_configured": true, 00:15:42.970 "data_offset": 2048, 00:15:42.970 "data_size": 63488 00:15:42.970 }, 00:15:42.970 { 00:15:42.970 "name": "BaseBdev3", 00:15:42.970 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:42.970 "is_configured": true, 00:15:42.970 "data_offset": 2048, 00:15:42.970 "data_size": 63488 00:15:42.970 }, 00:15:42.970 { 00:15:42.970 "name": "BaseBdev4", 00:15:42.970 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 00:15:42.970 "is_configured": true, 
00:15:42.970 "data_offset": 2048, 00:15:42.970 "data_size": 63488 00:15:42.970 } 00:15:42.970 ] 00:15:42.970 }' 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.970 [2024-10-09 01:35:41.850312] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:42.970 [2024-10-09 01:35:41.850495] bdev_raid.c:3690:raid_bdev_examine_sb: 
*DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:42.970 [2024-10-09 01:35:41.850515] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:42.970 request: 00:15:42.970 { 00:15:42.970 "base_bdev": "BaseBdev1", 00:15:42.970 "raid_bdev": "raid_bdev1", 00:15:42.970 "method": "bdev_raid_add_base_bdev", 00:15:42.970 "req_id": 1 00:15:42.970 } 00:15:42.970 Got JSON-RPC error response 00:15:42.970 response: 00:15:42.970 { 00:15:42.970 "code": -22, 00:15:42.970 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:42.970 } 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:42.970 01:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:44.352 01:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:44.352 01:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.352 01:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.352 01:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.352 01:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.352 01:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.352 01:35:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.352 01:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.352 01:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.352 01:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.352 01:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.352 01:35:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.353 01:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.353 01:35:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.353 01:35:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.353 01:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.353 "name": "raid_bdev1", 00:15:44.353 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:44.353 "strip_size_kb": 64, 00:15:44.353 "state": "online", 00:15:44.353 "raid_level": "raid5f", 00:15:44.353 "superblock": true, 00:15:44.353 "num_base_bdevs": 4, 00:15:44.353 "num_base_bdevs_discovered": 3, 00:15:44.353 "num_base_bdevs_operational": 3, 00:15:44.353 "base_bdevs_list": [ 00:15:44.353 { 00:15:44.353 "name": null, 00:15:44.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.353 "is_configured": false, 00:15:44.353 "data_offset": 0, 00:15:44.353 "data_size": 63488 00:15:44.353 }, 00:15:44.353 { 00:15:44.353 "name": "BaseBdev2", 00:15:44.353 "uuid": "88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:44.353 "is_configured": true, 00:15:44.353 "data_offset": 2048, 00:15:44.353 "data_size": 63488 00:15:44.353 }, 00:15:44.353 { 00:15:44.353 "name": "BaseBdev3", 00:15:44.353 "uuid": 
"ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:44.353 "is_configured": true, 00:15:44.353 "data_offset": 2048, 00:15:44.353 "data_size": 63488 00:15:44.353 }, 00:15:44.353 { 00:15:44.353 "name": "BaseBdev4", 00:15:44.353 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 00:15:44.353 "is_configured": true, 00:15:44.353 "data_offset": 2048, 00:15:44.353 "data_size": 63488 00:15:44.353 } 00:15:44.353 ] 00:15:44.353 }' 00:15:44.353 01:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.353 01:35:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.613 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:44.613 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.613 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:44.613 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:44.613 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.613 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.613 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.613 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.613 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.613 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.613 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.613 "name": "raid_bdev1", 00:15:44.613 "uuid": "7d3c4013-d2ff-4dbf-9c19-28c8aa22813a", 00:15:44.613 "strip_size_kb": 64, 00:15:44.613 "state": 
"online", 00:15:44.613 "raid_level": "raid5f", 00:15:44.613 "superblock": true, 00:15:44.613 "num_base_bdevs": 4, 00:15:44.613 "num_base_bdevs_discovered": 3, 00:15:44.613 "num_base_bdevs_operational": 3, 00:15:44.613 "base_bdevs_list": [ 00:15:44.613 { 00:15:44.613 "name": null, 00:15:44.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.613 "is_configured": false, 00:15:44.613 "data_offset": 0, 00:15:44.613 "data_size": 63488 00:15:44.613 }, 00:15:44.613 { 00:15:44.613 "name": "BaseBdev2", 00:15:44.613 "uuid": "88da29a8-2279-5377-a32e-16b82c58ddc3", 00:15:44.613 "is_configured": true, 00:15:44.613 "data_offset": 2048, 00:15:44.613 "data_size": 63488 00:15:44.613 }, 00:15:44.613 { 00:15:44.613 "name": "BaseBdev3", 00:15:44.613 "uuid": "ef16d893-804c-527b-b06e-f3ead38665bb", 00:15:44.613 "is_configured": true, 00:15:44.613 "data_offset": 2048, 00:15:44.613 "data_size": 63488 00:15:44.613 }, 00:15:44.613 { 00:15:44.613 "name": "BaseBdev4", 00:15:44.613 "uuid": "c4d39d85-cdf3-5dfb-b4a1-e2b722334e42", 00:15:44.613 "is_configured": true, 00:15:44.613 "data_offset": 2048, 00:15:44.613 "data_size": 63488 00:15:44.613 } 00:15:44.613 ] 00:15:44.613 }' 00:15:44.613 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.613 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:44.613 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.613 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:44.613 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 96686 00:15:44.613 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 96686 ']' 00:15:44.613 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 96686 00:15:44.613 01:35:43 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@955 -- # uname 00:15:44.613 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:44.613 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96686 00:15:44.613 killing process with pid 96686 00:15:44.613 Received shutdown signal, test time was about 60.000000 seconds 00:15:44.613 00:15:44.613 Latency(us) 00:15:44.613 [2024-10-09T01:35:43.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:44.613 [2024-10-09T01:35:43.506Z] =================================================================================================================== 00:15:44.613 [2024-10-09T01:35:43.506Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:44.613 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:44.613 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:44.613 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96686' 00:15:44.613 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 96686 00:15:44.613 [2024-10-09 01:35:43.439130] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:44.613 [2024-10-09 01:35:43.439227] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:44.613 [2024-10-09 01:35:43.439294] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:44.613 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 96686 00:15:44.613 [2024-10-09 01:35:43.439306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:44.873 [2024-10-09 01:35:43.531657] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:15:45.134 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:45.134 00:15:45.134 real 0m25.349s 00:15:45.134 user 0m31.915s 00:15:45.134 sys 0m3.122s 00:15:45.134 ************************************ 00:15:45.134 END TEST raid5f_rebuild_test_sb 00:15:45.134 ************************************ 00:15:45.134 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:45.134 01:35:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.134 01:35:43 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:15:45.134 01:35:43 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:15:45.134 01:35:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:45.134 01:35:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:45.134 01:35:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:45.134 ************************************ 00:15:45.134 START TEST raid_state_function_test_sb_4k 00:15:45.134 ************************************ 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs 
)) 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=97480 00:15:45.134 01:35:43 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 97480' 00:15:45.134 Process raid pid: 97480 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 97480 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 97480 ']' 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:45.134 01:35:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:45.394 [2024-10-09 01:35:44.065994] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:15:45.394 [2024-10-09 01:35:44.066225] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:45.394 [2024-10-09 01:35:44.199863] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:45.394 [2024-10-09 01:35:44.229592] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.653 [2024-10-09 01:35:44.299554] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.653 [2024-10-09 01:35:44.375438] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:45.654 [2024-10-09 01:35:44.375479] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:46.222 01:35:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:46.222 01:35:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:15:46.222 01:35:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:46.222 01:35:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.222 01:35:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.222 [2024-10-09 01:35:44.888154] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:46.222 [2024-10-09 01:35:44.888294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:46.222 [2024-10-09 01:35:44.888310] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:46.222 [2024-10-09 01:35:44.888318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:46.222 01:35:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.222 01:35:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:46.222 01:35:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.222 
01:35:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.222 01:35:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:46.222 01:35:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:46.222 01:35:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:46.222 01:35:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.222 01:35:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.222 01:35:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.223 01:35:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.223 01:35:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.223 01:35:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.223 01:35:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.223 01:35:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.223 01:35:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.223 01:35:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.223 "name": "Existed_Raid", 00:15:46.223 "uuid": "c96f9fec-3001-4d33-9b62-dca45bbf6759", 00:15:46.223 "strip_size_kb": 0, 00:15:46.223 "state": "configuring", 00:15:46.223 "raid_level": "raid1", 00:15:46.223 "superblock": true, 00:15:46.223 "num_base_bdevs": 2, 00:15:46.223 "num_base_bdevs_discovered": 0, 00:15:46.223 "num_base_bdevs_operational": 2, 
00:15:46.223 "base_bdevs_list": [ 00:15:46.223 { 00:15:46.223 "name": "BaseBdev1", 00:15:46.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.223 "is_configured": false, 00:15:46.223 "data_offset": 0, 00:15:46.223 "data_size": 0 00:15:46.223 }, 00:15:46.223 { 00:15:46.223 "name": "BaseBdev2", 00:15:46.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.223 "is_configured": false, 00:15:46.223 "data_offset": 0, 00:15:46.223 "data_size": 0 00:15:46.223 } 00:15:46.223 ] 00:15:46.223 }' 00:15:46.223 01:35:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.223 01:35:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.482 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:46.482 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.482 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.482 [2024-10-09 01:35:45.332158] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:46.482 [2024-10-09 01:35:45.332195] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:46.482 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.482 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:46.482 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.482 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.482 [2024-10-09 01:35:45.344162] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 
00:15:46.482 [2024-10-09 01:35:45.344198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:46.482 [2024-10-09 01:35:45.344209] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:46.482 [2024-10-09 01:35:45.344215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:46.482 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.482 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:15:46.482 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.482 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.482 [2024-10-09 01:35:45.371194] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:46.742 BaseBdev1 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:46.742 01:35:45 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.742 [ 00:15:46.742 { 00:15:46.742 "name": "BaseBdev1", 00:15:46.742 "aliases": [ 00:15:46.742 "4ed646fd-48e1-4805-ac54-973b9b99a8ee" 00:15:46.742 ], 00:15:46.742 "product_name": "Malloc disk", 00:15:46.742 "block_size": 4096, 00:15:46.742 "num_blocks": 8192, 00:15:46.742 "uuid": "4ed646fd-48e1-4805-ac54-973b9b99a8ee", 00:15:46.742 "assigned_rate_limits": { 00:15:46.742 "rw_ios_per_sec": 0, 00:15:46.742 "rw_mbytes_per_sec": 0, 00:15:46.742 "r_mbytes_per_sec": 0, 00:15:46.742 "w_mbytes_per_sec": 0 00:15:46.742 }, 00:15:46.742 "claimed": true, 00:15:46.742 "claim_type": "exclusive_write", 00:15:46.742 "zoned": false, 00:15:46.742 "supported_io_types": { 00:15:46.742 "read": true, 00:15:46.742 "write": true, 00:15:46.742 "unmap": true, 00:15:46.742 "flush": true, 00:15:46.742 "reset": true, 00:15:46.742 "nvme_admin": false, 00:15:46.742 "nvme_io": false, 00:15:46.742 "nvme_io_md": false, 00:15:46.742 "write_zeroes": true, 00:15:46.742 "zcopy": true, 00:15:46.742 "get_zone_info": false, 00:15:46.742 "zone_management": false, 00:15:46.742 "zone_append": false, 00:15:46.742 "compare": false, 00:15:46.742 "compare_and_write": false, 00:15:46.742 "abort": true, 00:15:46.742 "seek_hole": false, 00:15:46.742 "seek_data": false, 00:15:46.742 "copy": true, 00:15:46.742 "nvme_iov_md": false 
00:15:46.742 }, 00:15:46.742 "memory_domains": [ 00:15:46.742 { 00:15:46.742 "dma_device_id": "system", 00:15:46.742 "dma_device_type": 1 00:15:46.742 }, 00:15:46.742 { 00:15:46.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.742 "dma_device_type": 2 00:15:46.742 } 00:15:46.742 ], 00:15:46.742 "driver_specific": {} 00:15:46.742 } 00:15:46.742 ] 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.742 "name": "Existed_Raid", 00:15:46.742 "uuid": "79c5f63e-b12e-4ecd-b53b-1928bd5eec90", 00:15:46.742 "strip_size_kb": 0, 00:15:46.742 "state": "configuring", 00:15:46.742 "raid_level": "raid1", 00:15:46.742 "superblock": true, 00:15:46.742 "num_base_bdevs": 2, 00:15:46.742 "num_base_bdevs_discovered": 1, 00:15:46.742 "num_base_bdevs_operational": 2, 00:15:46.742 "base_bdevs_list": [ 00:15:46.742 { 00:15:46.742 "name": "BaseBdev1", 00:15:46.742 "uuid": "4ed646fd-48e1-4805-ac54-973b9b99a8ee", 00:15:46.742 "is_configured": true, 00:15:46.742 "data_offset": 256, 00:15:46.742 "data_size": 7936 00:15:46.742 }, 00:15:46.742 { 00:15:46.742 "name": "BaseBdev2", 00:15:46.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.742 "is_configured": false, 00:15:46.742 "data_offset": 0, 00:15:46.742 "data_size": 0 00:15:46.742 } 00:15:46.742 ] 00:15:46.742 }' 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.742 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.002 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:47.002 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.002 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.002 [2024-10-09 
01:35:45.851333] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:47.002 [2024-10-09 01:35:45.851376] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:47.002 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.002 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:47.002 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.002 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.002 [2024-10-09 01:35:45.863348] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:47.002 [2024-10-09 01:35:45.865433] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:47.002 [2024-10-09 01:35:45.865470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:47.002 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.002 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:47.002 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:47.002 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:47.002 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.002 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.002 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:15:47.002 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:47.002 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:47.002 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.002 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.002 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.002 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.002 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.002 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.002 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.002 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.002 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.262 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.262 "name": "Existed_Raid", 00:15:47.262 "uuid": "17ca72ad-bff5-4839-8e1e-e8bf83e02738", 00:15:47.262 "strip_size_kb": 0, 00:15:47.262 "state": "configuring", 00:15:47.262 "raid_level": "raid1", 00:15:47.262 "superblock": true, 00:15:47.262 "num_base_bdevs": 2, 00:15:47.262 "num_base_bdevs_discovered": 1, 00:15:47.262 "num_base_bdevs_operational": 2, 00:15:47.262 "base_bdevs_list": [ 00:15:47.262 { 00:15:47.262 "name": "BaseBdev1", 00:15:47.262 "uuid": "4ed646fd-48e1-4805-ac54-973b9b99a8ee", 00:15:47.262 "is_configured": true, 00:15:47.262 "data_offset": 256, 
00:15:47.262 "data_size": 7936 00:15:47.262 }, 00:15:47.262 { 00:15:47.262 "name": "BaseBdev2", 00:15:47.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.262 "is_configured": false, 00:15:47.262 "data_offset": 0, 00:15:47.262 "data_size": 0 00:15:47.262 } 00:15:47.262 ] 00:15:47.262 }' 00:15:47.262 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.262 01:35:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.521 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:15:47.521 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.521 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.521 [2024-10-09 01:35:46.337340] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:47.521 [2024-10-09 01:35:46.337913] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:47.521 [2024-10-09 01:35:46.338086] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:47.521 BaseBdev2 00:15:47.521 [2024-10-09 01:35:46.338919] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:47.521 [2024-10-09 01:35:46.339387] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:47.521 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.521 [2024-10-09 01:35:46.339542] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:47.521 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:47.521 [2024-10-09 01:35:46.339941] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.521 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:47.521 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:47.521 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:15:47.521 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:47.521 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:47.521 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:47.521 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.521 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.521 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.521 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:47.521 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.521 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.521 [ 00:15:47.521 { 00:15:47.521 "name": "BaseBdev2", 00:15:47.521 "aliases": [ 00:15:47.521 "a38b0c0e-a098-4dad-9b20-6c7d5204cb06" 00:15:47.521 ], 00:15:47.521 "product_name": "Malloc disk", 00:15:47.521 "block_size": 4096, 00:15:47.521 "num_blocks": 8192, 00:15:47.521 "uuid": "a38b0c0e-a098-4dad-9b20-6c7d5204cb06", 00:15:47.521 "assigned_rate_limits": { 00:15:47.521 "rw_ios_per_sec": 0, 00:15:47.521 "rw_mbytes_per_sec": 0, 00:15:47.521 "r_mbytes_per_sec": 0, 00:15:47.521 "w_mbytes_per_sec": 0 
00:15:47.521 }, 00:15:47.521 "claimed": true, 00:15:47.521 "claim_type": "exclusive_write", 00:15:47.521 "zoned": false, 00:15:47.521 "supported_io_types": { 00:15:47.521 "read": true, 00:15:47.521 "write": true, 00:15:47.521 "unmap": true, 00:15:47.521 "flush": true, 00:15:47.521 "reset": true, 00:15:47.521 "nvme_admin": false, 00:15:47.521 "nvme_io": false, 00:15:47.521 "nvme_io_md": false, 00:15:47.521 "write_zeroes": true, 00:15:47.521 "zcopy": true, 00:15:47.521 "get_zone_info": false, 00:15:47.521 "zone_management": false, 00:15:47.521 "zone_append": false, 00:15:47.521 "compare": false, 00:15:47.521 "compare_and_write": false, 00:15:47.521 "abort": true, 00:15:47.521 "seek_hole": false, 00:15:47.521 "seek_data": false, 00:15:47.521 "copy": true, 00:15:47.521 "nvme_iov_md": false 00:15:47.521 }, 00:15:47.521 "memory_domains": [ 00:15:47.522 { 00:15:47.522 "dma_device_id": "system", 00:15:47.522 "dma_device_type": 1 00:15:47.522 }, 00:15:47.522 { 00:15:47.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.522 "dma_device_type": 2 00:15:47.522 } 00:15:47.522 ], 00:15:47.522 "driver_specific": {} 00:15:47.522 } 00:15:47.522 ] 00:15:47.522 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.522 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:15:47.522 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:47.522 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:47.522 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:47.522 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.522 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:15:47.522 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:47.522 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:47.522 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:47.522 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.522 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.522 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.522 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.522 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.522 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.522 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.522 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:47.522 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.781 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.781 "name": "Existed_Raid", 00:15:47.781 "uuid": "17ca72ad-bff5-4839-8e1e-e8bf83e02738", 00:15:47.781 "strip_size_kb": 0, 00:15:47.781 "state": "online", 00:15:47.781 "raid_level": "raid1", 00:15:47.781 "superblock": true, 00:15:47.781 "num_base_bdevs": 2, 00:15:47.781 "num_base_bdevs_discovered": 2, 00:15:47.781 "num_base_bdevs_operational": 2, 00:15:47.781 "base_bdevs_list": [ 00:15:47.781 { 00:15:47.781 "name": "BaseBdev1", 00:15:47.781 "uuid": 
"4ed646fd-48e1-4805-ac54-973b9b99a8ee", 00:15:47.781 "is_configured": true, 00:15:47.781 "data_offset": 256, 00:15:47.781 "data_size": 7936 00:15:47.781 }, 00:15:47.781 { 00:15:47.781 "name": "BaseBdev2", 00:15:47.781 "uuid": "a38b0c0e-a098-4dad-9b20-6c7d5204cb06", 00:15:47.781 "is_configured": true, 00:15:47.781 "data_offset": 256, 00:15:47.781 "data_size": 7936 00:15:47.781 } 00:15:47.781 ] 00:15:47.781 }' 00:15:47.781 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.781 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.041 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:48.041 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:48.041 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:48.041 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:48.041 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:48.041 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:48.041 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:48.041 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:48.041 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.041 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.041 [2024-10-09 01:35:46.833712] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.041 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.041 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:48.041 "name": "Existed_Raid", 00:15:48.041 "aliases": [ 00:15:48.041 "17ca72ad-bff5-4839-8e1e-e8bf83e02738" 00:15:48.041 ], 00:15:48.041 "product_name": "Raid Volume", 00:15:48.041 "block_size": 4096, 00:15:48.041 "num_blocks": 7936, 00:15:48.041 "uuid": "17ca72ad-bff5-4839-8e1e-e8bf83e02738", 00:15:48.041 "assigned_rate_limits": { 00:15:48.041 "rw_ios_per_sec": 0, 00:15:48.041 "rw_mbytes_per_sec": 0, 00:15:48.041 "r_mbytes_per_sec": 0, 00:15:48.041 "w_mbytes_per_sec": 0 00:15:48.041 }, 00:15:48.041 "claimed": false, 00:15:48.041 "zoned": false, 00:15:48.041 "supported_io_types": { 00:15:48.041 "read": true, 00:15:48.041 "write": true, 00:15:48.041 "unmap": false, 00:15:48.041 "flush": false, 00:15:48.041 "reset": true, 00:15:48.041 "nvme_admin": false, 00:15:48.041 "nvme_io": false, 00:15:48.041 "nvme_io_md": false, 00:15:48.041 "write_zeroes": true, 00:15:48.041 "zcopy": false, 00:15:48.041 "get_zone_info": false, 00:15:48.041 "zone_management": false, 00:15:48.041 "zone_append": false, 00:15:48.041 "compare": false, 00:15:48.041 "compare_and_write": false, 00:15:48.041 "abort": false, 00:15:48.041 "seek_hole": false, 00:15:48.041 "seek_data": false, 00:15:48.041 "copy": false, 00:15:48.041 "nvme_iov_md": false 00:15:48.041 }, 00:15:48.041 "memory_domains": [ 00:15:48.041 { 00:15:48.041 "dma_device_id": "system", 00:15:48.041 "dma_device_type": 1 00:15:48.041 }, 00:15:48.041 { 00:15:48.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.041 "dma_device_type": 2 00:15:48.041 }, 00:15:48.041 { 00:15:48.041 "dma_device_id": "system", 00:15:48.041 "dma_device_type": 1 00:15:48.041 }, 00:15:48.041 { 00:15:48.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.041 "dma_device_type": 2 00:15:48.041 } 00:15:48.041 ], 00:15:48.041 "driver_specific": { 00:15:48.041 "raid": { 00:15:48.041 "uuid": 
"17ca72ad-bff5-4839-8e1e-e8bf83e02738", 00:15:48.041 "strip_size_kb": 0, 00:15:48.041 "state": "online", 00:15:48.041 "raid_level": "raid1", 00:15:48.041 "superblock": true, 00:15:48.041 "num_base_bdevs": 2, 00:15:48.041 "num_base_bdevs_discovered": 2, 00:15:48.041 "num_base_bdevs_operational": 2, 00:15:48.041 "base_bdevs_list": [ 00:15:48.041 { 00:15:48.041 "name": "BaseBdev1", 00:15:48.041 "uuid": "4ed646fd-48e1-4805-ac54-973b9b99a8ee", 00:15:48.041 "is_configured": true, 00:15:48.041 "data_offset": 256, 00:15:48.041 "data_size": 7936 00:15:48.041 }, 00:15:48.041 { 00:15:48.041 "name": "BaseBdev2", 00:15:48.041 "uuid": "a38b0c0e-a098-4dad-9b20-6c7d5204cb06", 00:15:48.041 "is_configured": true, 00:15:48.041 "data_offset": 256, 00:15:48.041 "data_size": 7936 00:15:48.041 } 00:15:48.041 ] 00:15:48.041 } 00:15:48.041 } 00:15:48.041 }' 00:15:48.041 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:48.041 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:48.041 BaseBdev2' 00:15:48.041 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.301 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:48.301 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.301 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:48.301 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.301 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.301 
01:35:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.301 01:35:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.301 [2024-10-09 01:35:47.081583] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.301 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.301 "name": "Existed_Raid", 00:15:48.301 "uuid": "17ca72ad-bff5-4839-8e1e-e8bf83e02738", 00:15:48.301 "strip_size_kb": 0, 00:15:48.301 "state": "online", 00:15:48.301 "raid_level": "raid1", 00:15:48.301 "superblock": true, 00:15:48.301 "num_base_bdevs": 2, 00:15:48.301 "num_base_bdevs_discovered": 1, 00:15:48.301 "num_base_bdevs_operational": 1, 00:15:48.301 "base_bdevs_list": [ 00:15:48.301 { 00:15:48.301 "name": null, 00:15:48.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.302 "is_configured": false, 00:15:48.302 "data_offset": 0, 00:15:48.302 "data_size": 7936 00:15:48.302 }, 00:15:48.302 { 00:15:48.302 "name": "BaseBdev2", 00:15:48.302 "uuid": "a38b0c0e-a098-4dad-9b20-6c7d5204cb06", 00:15:48.302 "is_configured": true, 00:15:48.302 "data_offset": 256, 00:15:48.302 "data_size": 7936 00:15:48.302 } 00:15:48.302 ] 00:15:48.302 }' 00:15:48.302 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.302 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.871 [2024-10-09 01:35:47.594408] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:48.871 [2024-10-09 01:35:47.594510] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:48.871 [2024-10-09 01:35:47.615403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:48.871 [2024-10-09 01:35:47.615451] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:48.871 [2024-10-09 01:35:47.615467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 
00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 97480 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 97480 ']' 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 97480 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97480 00:15:48.871 killing process with pid 97480 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 97480' 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 97480 00:15:48.871 [2024-10-09 01:35:47.713460] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:48.871 01:35:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 97480 00:15:48.871 [2024-10-09 01:35:47.715034] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:49.441 01:35:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:15:49.441 00:15:49.441 real 0m4.126s 00:15:49.441 user 0m6.283s 00:15:49.441 sys 0m0.916s 00:15:49.441 01:35:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:49.441 01:35:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.441 ************************************ 00:15:49.441 END TEST raid_state_function_test_sb_4k 00:15:49.441 ************************************ 00:15:49.441 01:35:48 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:15:49.441 01:35:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:49.441 01:35:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:49.441 01:35:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:49.441 ************************************ 00:15:49.441 START TEST raid_superblock_test_4k 00:15:49.441 ************************************ 00:15:49.441 01:35:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:15:49.441 01:35:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:49.441 01:35:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:49.441 01:35:48 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:49.441 01:35:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:49.441 01:35:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:49.441 01:35:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:49.441 01:35:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:49.441 01:35:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:49.441 01:35:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:49.441 01:35:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:49.441 01:35:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:49.441 01:35:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:49.441 01:35:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:49.441 01:35:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:49.441 01:35:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:49.441 01:35:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=97721 00:15:49.441 01:35:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:49.441 01:35:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 97721 00:15:49.441 01:35:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 97721 ']' 00:15:49.441 01:35:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.441 01:35:48 bdev_raid.raid_superblock_test_4k 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:15:49.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.441 01:35:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.441 01:35:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:49.441 01:35:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:49.441 [2024-10-09 01:35:48.261898] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:15:49.441 [2024-10-09 01:35:48.262047] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97721 ] 00:15:49.701 [2024-10-09 01:35:48.394173] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:49.701 [2024-10-09 01:35:48.422783] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.701 [2024-10-09 01:35:48.492870] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.701 [2024-10-09 01:35:48.568855] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.701 [2024-10-09 01:35:48.568892] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.273 malloc1 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.273 [2024-10-09 01:35:49.096553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:50.273 [2024-10-09 01:35:49.096627] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.273 [2024-10-09 01:35:49.096651] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:50.273 [2024-10-09 01:35:49.096666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.273 [2024-10-09 01:35:49.099088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.273 [2024-10-09 01:35:49.099122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:50.273 pt1 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:50.273 01:35:49 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.273 malloc2 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.273 [2024-10-09 01:35:49.147916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:50.273 [2024-10-09 01:35:49.148016] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.273 [2024-10-09 01:35:49.148060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:50.273 [2024-10-09 01:35:49.148083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.273 [2024-10-09 01:35:49.153239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.273 [2024-10-09 01:35:49.153307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:50.273 pt2 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( 
i++ )) 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.273 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.273 [2024-10-09 01:35:49.161649] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:50.544 [2024-10-09 01:35:49.164912] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:50.544 [2024-10-09 01:35:49.165101] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:50.544 [2024-10-09 01:35:49.165118] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:50.544 [2024-10-09 01:35:49.165449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:50.544 [2024-10-09 01:35:49.165632] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:50.544 [2024-10-09 01:35:49.165654] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:50.544 [2024-10-09 01:35:49.165848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.544 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.544 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:50.544 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.544 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.544 01:35:49 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.544 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.544 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:50.544 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.544 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.544 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.544 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.544 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.544 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.544 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.544 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.544 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.544 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.544 "name": "raid_bdev1", 00:15:50.544 "uuid": "5d81b486-0899-42ca-9ab7-caeb313e4554", 00:15:50.544 "strip_size_kb": 0, 00:15:50.544 "state": "online", 00:15:50.544 "raid_level": "raid1", 00:15:50.544 "superblock": true, 00:15:50.544 "num_base_bdevs": 2, 00:15:50.544 "num_base_bdevs_discovered": 2, 00:15:50.544 "num_base_bdevs_operational": 2, 00:15:50.544 "base_bdevs_list": [ 00:15:50.544 { 00:15:50.544 "name": "pt1", 00:15:50.544 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:50.544 "is_configured": true, 00:15:50.544 "data_offset": 256, 00:15:50.544 "data_size": 
7936 00:15:50.544 }, 00:15:50.544 { 00:15:50.544 "name": "pt2", 00:15:50.544 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.544 "is_configured": true, 00:15:50.544 "data_offset": 256, 00:15:50.544 "data_size": 7936 00:15:50.544 } 00:15:50.544 ] 00:15:50.544 }' 00:15:50.544 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.544 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.821 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:50.821 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:50.821 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:50.821 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:50.821 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:50.821 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:50.821 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:50.821 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:50.821 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.821 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.821 [2024-10-09 01:35:49.562217] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.821 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.821 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:50.821 "name": "raid_bdev1", 00:15:50.821 "aliases": [ 00:15:50.821 
"5d81b486-0899-42ca-9ab7-caeb313e4554" 00:15:50.821 ], 00:15:50.821 "product_name": "Raid Volume", 00:15:50.821 "block_size": 4096, 00:15:50.821 "num_blocks": 7936, 00:15:50.821 "uuid": "5d81b486-0899-42ca-9ab7-caeb313e4554", 00:15:50.821 "assigned_rate_limits": { 00:15:50.821 "rw_ios_per_sec": 0, 00:15:50.821 "rw_mbytes_per_sec": 0, 00:15:50.821 "r_mbytes_per_sec": 0, 00:15:50.821 "w_mbytes_per_sec": 0 00:15:50.821 }, 00:15:50.821 "claimed": false, 00:15:50.821 "zoned": false, 00:15:50.821 "supported_io_types": { 00:15:50.821 "read": true, 00:15:50.821 "write": true, 00:15:50.821 "unmap": false, 00:15:50.821 "flush": false, 00:15:50.821 "reset": true, 00:15:50.821 "nvme_admin": false, 00:15:50.821 "nvme_io": false, 00:15:50.821 "nvme_io_md": false, 00:15:50.821 "write_zeroes": true, 00:15:50.821 "zcopy": false, 00:15:50.821 "get_zone_info": false, 00:15:50.821 "zone_management": false, 00:15:50.821 "zone_append": false, 00:15:50.821 "compare": false, 00:15:50.821 "compare_and_write": false, 00:15:50.821 "abort": false, 00:15:50.821 "seek_hole": false, 00:15:50.821 "seek_data": false, 00:15:50.821 "copy": false, 00:15:50.821 "nvme_iov_md": false 00:15:50.821 }, 00:15:50.821 "memory_domains": [ 00:15:50.821 { 00:15:50.821 "dma_device_id": "system", 00:15:50.821 "dma_device_type": 1 00:15:50.821 }, 00:15:50.821 { 00:15:50.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.821 "dma_device_type": 2 00:15:50.821 }, 00:15:50.821 { 00:15:50.821 "dma_device_id": "system", 00:15:50.821 "dma_device_type": 1 00:15:50.821 }, 00:15:50.821 { 00:15:50.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.821 "dma_device_type": 2 00:15:50.821 } 00:15:50.821 ], 00:15:50.821 "driver_specific": { 00:15:50.821 "raid": { 00:15:50.821 "uuid": "5d81b486-0899-42ca-9ab7-caeb313e4554", 00:15:50.821 "strip_size_kb": 0, 00:15:50.821 "state": "online", 00:15:50.821 "raid_level": "raid1", 00:15:50.821 "superblock": true, 00:15:50.821 "num_base_bdevs": 2, 00:15:50.821 
"num_base_bdevs_discovered": 2, 00:15:50.821 "num_base_bdevs_operational": 2, 00:15:50.821 "base_bdevs_list": [ 00:15:50.821 { 00:15:50.821 "name": "pt1", 00:15:50.821 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:50.821 "is_configured": true, 00:15:50.821 "data_offset": 256, 00:15:50.821 "data_size": 7936 00:15:50.821 }, 00:15:50.821 { 00:15:50.821 "name": "pt2", 00:15:50.821 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.821 "is_configured": true, 00:15:50.821 "data_offset": 256, 00:15:50.821 "data_size": 7936 00:15:50.821 } 00:15:50.821 ] 00:15:50.821 } 00:15:50.821 } 00:15:50.821 }' 00:15:50.821 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:50.821 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:50.821 pt2' 00:15:50.821 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.821 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:50.821 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:50.821 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:50.821 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.821 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:50.821 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:50.821 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.081 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 
00:15:51.081 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:51.081 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.081 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:51.081 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.081 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.081 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.081 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.082 [2024-10-09 01:35:49.778171] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5d81b486-0899-42ca-9ab7-caeb313e4554 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 5d81b486-0899-42ca-9ab7-caeb313e4554 ']' 00:15:51.082 
01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.082 [2024-10-09 01:35:49.825947] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.082 [2024-10-09 01:35:49.825975] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.082 [2024-10-09 01:35:49.826072] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.082 [2024-10-09 01:35:49.826123] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.082 [2024-10-09 01:35:49.826145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:51.082 01:35:49 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.082 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:51.082 [2024-10-09 01:35:49.966016] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:15:51.082 [2024-10-09 01:35:49.968118] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:15:51.082 [2024-10-09 01:35:49.968177] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:15:51.082 [2024-10-09 01:35:49.968230] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:15:51.082 [2024-10-09 01:35:49.968247] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:51.082 [2024-10-09 01:35:49.968257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:15:51.082 request:
00:15:51.082 {
00:15:51.082 "name": "raid_bdev1",
00:15:51.082 "raid_level": "raid1",
00:15:51.082 "base_bdevs": [
00:15:51.082 "malloc1",
00:15:51.082 "malloc2"
00:15:51.082 ],
00:15:51.082 "superblock": false,
00:15:51.082 "method": "bdev_raid_create",
00:15:51.082 "req_id": 1
00:15:51.082 }
00:15:51.082 Got JSON-RPC error response
00:15:51.082 response:
00:15:51.082 {
00:15:51.082 "code": -17,
00:15:51.082 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:15:51.082 }
00:15:51.342 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:15:51.342 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1
00:15:51.342 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:15:51.342 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:15:51.342 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:15:51.342 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:15:51.342 01:35:49 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:51.342 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.342 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:51.342 01:35:49 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.342 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:15:51.342 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:15:51.342 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:51.342 01:35:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.342 01:35:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:51.342 [2024-10-09 01:35:50.018033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:51.342 [2024-10-09 01:35:50.018089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:51.342 [2024-10-09 01:35:50.018102] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:15:51.342 [2024-10-09 01:35:50.018115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:51.342 [2024-10-09 01:35:50.020268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:51.342 [2024-10-09 01:35:50.020303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:51.342 [2024-10-09 01:35:50.020354] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:15:51.342 [2024-10-09 01:35:50.020393] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:51.342 pt1
00:15:51.342 01:35:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.342 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:15:51.342 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:51.342 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:51.342 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:51.342 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:51.342 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:51.342 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:51.342 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:51.342 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:51.342 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:51.342 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:51.342 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:51.342 01:35:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.342 01:35:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:51.342 01:35:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.342 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:51.342 "name": "raid_bdev1",
00:15:51.342 "uuid": "5d81b486-0899-42ca-9ab7-caeb313e4554",
00:15:51.342 "strip_size_kb": 0,
00:15:51.342 "state": "configuring",
00:15:51.342 "raid_level": "raid1",
00:15:51.342 "superblock": true,
00:15:51.342 "num_base_bdevs": 2,
00:15:51.342 "num_base_bdevs_discovered": 1,
00:15:51.342 "num_base_bdevs_operational": 2,
00:15:51.342 "base_bdevs_list": [
00:15:51.342 {
00:15:51.342 "name": "pt1",
00:15:51.342 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:51.342 "is_configured": true,
00:15:51.342 "data_offset": 256,
00:15:51.342 "data_size": 7936
00:15:51.342 },
00:15:51.342 {
00:15:51.342 "name": null,
00:15:51.342 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:51.342 "is_configured": false,
00:15:51.342 "data_offset": 256,
00:15:51.342 "data_size": 7936
00:15:51.342 }
00:15:51.342 ]
00:15:51.342 }'
00:15:51.342 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:51.342 01:35:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:51.602 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:15:51.602 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:15:51.602 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:51.602 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:51.602 01:35:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.602 01:35:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:51.602 [2024-10-09 01:35:50.454133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:51.602 [2024-10-09 01:35:50.454180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:51.602 [2024-10-09 01:35:50.454195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:15:51.602 [2024-10-09 01:35:50.454205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:51.602 [2024-10-09 01:35:50.454510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:51.602 [2024-10-09 01:35:50.454548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:51.602 [2024-10-09 01:35:50.454597] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:15:51.602 [2024-10-09 01:35:50.454617] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:51.602 [2024-10-09 01:35:50.454700] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:15:51.602 [2024-10-09 01:35:50.454713] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:15:51.602 [2024-10-09 01:35:50.454928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:15:51.602 [2024-10-09 01:35:50.455054] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:15:51.602 [2024-10-09 01:35:50.455068] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:15:51.602 [2024-10-09 01:35:50.455157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:51.602 pt2
00:15:51.602 01:35:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.602 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:15:51.602 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:51.602 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:15:51.602 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:51.602 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:51.602 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:51.602 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:51.602 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:51.602 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:51.602 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:51.602 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:51.602 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:51.602 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:51.602 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:51.602 01:35:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:51.602 01:35:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:51.602 01:35:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:51.861 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:51.861 "name": "raid_bdev1",
00:15:51.861 "uuid": "5d81b486-0899-42ca-9ab7-caeb313e4554",
00:15:51.861 "strip_size_kb": 0,
00:15:51.861 "state": "online",
00:15:51.861 "raid_level": "raid1",
00:15:51.861 "superblock": true,
00:15:51.861 "num_base_bdevs": 2,
00:15:51.861 "num_base_bdevs_discovered": 2,
00:15:51.861 "num_base_bdevs_operational": 2,
00:15:51.861 "base_bdevs_list": [
00:15:51.861 {
00:15:51.861 "name": "pt1",
00:15:51.861 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:51.861 "is_configured": true,
00:15:51.861 "data_offset": 256,
00:15:51.861 "data_size": 7936
00:15:51.861 },
00:15:51.861 {
00:15:51.861 "name": "pt2",
00:15:51.861 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:51.861 "is_configured": true,
00:15:51.861 "data_offset": 256,
00:15:51.862 "data_size": 7936
00:15:51.862 }
00:15:51.862 ]
00:15:51.862 }'
00:15:51.862 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:51.862 01:35:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:52.121 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:15:52.121 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:15:52.121 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:15:52.122 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:15:52.122 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name
00:15:52.122 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:15:52.122 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:52.122 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:15:52.122 01:35:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.122 01:35:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:52.122 [2024-10-09 01:35:50.894473] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:52.122 01:35:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.122 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:15:52.122 "name": "raid_bdev1",
00:15:52.122 "aliases": [
00:15:52.122 "5d81b486-0899-42ca-9ab7-caeb313e4554"
00:15:52.122 ],
00:15:52.122 "product_name": "Raid Volume",
00:15:52.122 "block_size": 4096,
00:15:52.122 "num_blocks": 7936,
00:15:52.122 "uuid": "5d81b486-0899-42ca-9ab7-caeb313e4554",
00:15:52.122 "assigned_rate_limits": {
00:15:52.122 "rw_ios_per_sec": 0,
00:15:52.122 "rw_mbytes_per_sec": 0,
00:15:52.122 "r_mbytes_per_sec": 0,
00:15:52.122 "w_mbytes_per_sec": 0
00:15:52.122 },
00:15:52.122 "claimed": false,
00:15:52.122 "zoned": false,
00:15:52.122 "supported_io_types": {
00:15:52.122 "read": true,
00:15:52.122 "write": true,
00:15:52.122 "unmap": false,
00:15:52.122 "flush": false,
00:15:52.122 "reset": true,
00:15:52.122 "nvme_admin": false,
00:15:52.122 "nvme_io": false,
00:15:52.122 "nvme_io_md": false,
00:15:52.122 "write_zeroes": true,
00:15:52.122 "zcopy": false,
00:15:52.122 "get_zone_info": false,
00:15:52.122 "zone_management": false,
00:15:52.122 "zone_append": false,
00:15:52.122 "compare": false,
00:15:52.122 "compare_and_write": false,
00:15:52.122 "abort": false,
00:15:52.122 "seek_hole": false,
00:15:52.122 "seek_data": false,
00:15:52.122 "copy": false,
00:15:52.122 "nvme_iov_md": false
00:15:52.122 },
00:15:52.122 "memory_domains": [
00:15:52.122 {
00:15:52.122 "dma_device_id": "system",
00:15:52.122 "dma_device_type": 1
00:15:52.122 },
00:15:52.122 {
00:15:52.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:52.122 "dma_device_type": 2
00:15:52.122 },
00:15:52.122 {
00:15:52.122 "dma_device_id": "system",
00:15:52.122 "dma_device_type": 1
00:15:52.122 },
00:15:52.122 {
00:15:52.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:52.122 "dma_device_type": 2
00:15:52.122 }
00:15:52.122 ],
00:15:52.122 "driver_specific": {
00:15:52.122 "raid": {
00:15:52.122 "uuid": "5d81b486-0899-42ca-9ab7-caeb313e4554",
00:15:52.122 "strip_size_kb": 0,
00:15:52.122 "state": "online",
00:15:52.122 "raid_level": "raid1",
00:15:52.122 "superblock": true,
00:15:52.122 "num_base_bdevs": 2,
00:15:52.122 "num_base_bdevs_discovered": 2,
00:15:52.122 "num_base_bdevs_operational": 2,
00:15:52.122 "base_bdevs_list": [
00:15:52.122 {
00:15:52.122 "name": "pt1",
00:15:52.122 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:52.122 "is_configured": true,
00:15:52.122 "data_offset": 256,
00:15:52.122 "data_size": 7936
00:15:52.122 },
00:15:52.122 {
00:15:52.122 "name": "pt2",
00:15:52.122 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:52.122 "is_configured": true,
00:15:52.122 "data_offset": 256,
00:15:52.122 "data_size": 7936
00:15:52.122 }
00:15:52.122 ]
00:15:52.122 }
00:15:52.122 }
00:15:52.122 }'
00:15:52.122 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:52.122 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:15:52.122 pt2'
00:15:52.122 01:35:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:52.122 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 '
00:15:52.122 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:52.122 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:15:52.122 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.122 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:52.122 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 '
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]]
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 '
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]]
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:15:52.382 [2024-10-09 01:35:51.102567] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 5d81b486-0899-42ca-9ab7-caeb313e4554 '!=' 5d81b486-0899-42ca-9ab7-caeb313e4554 ']'
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:52.382 [2024-10-09 01:35:51.150370] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:52.382 "name": "raid_bdev1",
00:15:52.382 "uuid": "5d81b486-0899-42ca-9ab7-caeb313e4554",
00:15:52.382 "strip_size_kb": 0,
00:15:52.382 "state": "online",
00:15:52.382 "raid_level": "raid1",
00:15:52.382 "superblock": true,
00:15:52.382 "num_base_bdevs": 2,
00:15:52.382 "num_base_bdevs_discovered": 1,
00:15:52.382 "num_base_bdevs_operational": 1,
00:15:52.382 "base_bdevs_list": [
00:15:52.382 {
00:15:52.382 "name": null,
00:15:52.382 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:52.382 "is_configured": false,
00:15:52.382 "data_offset": 0,
00:15:52.382 "data_size": 7936
00:15:52.382 },
00:15:52.382 {
00:15:52.382 "name": "pt2",
00:15:52.382 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:52.382 "is_configured": true,
00:15:52.382 "data_offset": 256,
00:15:52.382 "data_size": 7936
00:15:52.382 }
00:15:52.382 ]
00:15:52.382 }'
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:52.382 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:52.951 [2024-10-09 01:35:51.606468] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:52.951 [2024-10-09 01:35:51.606491] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:52.951 [2024-10-09 01:35:51.606550] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:52.951 [2024-10-09 01:35:51.606582] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:52.951 [2024-10-09 01:35:51.606591] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:52.951 [2024-10-09 01:35:51.682487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:52.951 [2024-10-09 01:35:51.682541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:52.951 [2024-10-09 01:35:51.682554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:15:52.951 [2024-10-09 01:35:51.682563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:52.951 [2024-10-09 01:35:51.684910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:52.951 [2024-10-09 01:35:51.684948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:52.951 [2024-10-09 01:35:51.685002] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:15:52.951 [2024-10-09 01:35:51.685029] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:52.951 [2024-10-09 01:35:51.685086] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:15:52.951 [2024-10-09 01:35:51.685096] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:15:52.951 [2024-10-09 01:35:51.685294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:15:52.951 [2024-10-09 01:35:51.685415] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:15:52.951 [2024-10-09 01:35:51.685426] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:15:52.951 [2024-10-09 01:35:51.685516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:52.951 pt2
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:52.951 "name": "raid_bdev1",
00:15:52.951 "uuid": "5d81b486-0899-42ca-9ab7-caeb313e4554",
00:15:52.951 "strip_size_kb": 0,
00:15:52.951 "state": "online",
00:15:52.951 "raid_level": "raid1",
00:15:52.951 "superblock": true,
00:15:52.951 "num_base_bdevs": 2,
00:15:52.951 "num_base_bdevs_discovered": 1,
00:15:52.951 "num_base_bdevs_operational": 1,
00:15:52.951 "base_bdevs_list": [
00:15:52.951 {
00:15:52.951 "name": null,
00:15:52.951 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:52.951 "is_configured": false,
00:15:52.951 "data_offset": 256,
00:15:52.951 "data_size": 7936
00:15:52.951 },
00:15:52.951 {
00:15:52.951 "name": "pt2",
00:15:52.951 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:52.951 "is_configured": true,
00:15:52.951 "data_offset": 256,
00:15:52.951 "data_size": 7936
00:15:52.951 }
00:15:52.951 ]
00:15:52.951 }'
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:52.951 01:35:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:53.520 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:15:53.520 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.520 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:53.520 [2024-10-09 01:35:52.126598] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:53.520 [2024-10-09 01:35:52.126622] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:53.520 [2024-10-09 01:35:52.126663] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:53.520 [2024-10-09 01:35:52.126697] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:53.520 [2024-10-09 01:35:52.126704] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:15:53.520 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.520 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:53.520 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.520 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:15:53.520 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:53.520 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.520 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:15:53.520 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:15:53.520 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']'
00:15:53.520 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:53.520 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.520 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:53.520 [2024-10-09 01:35:52.170622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:53.520 [2024-10-09 01:35:52.170660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:53.520 [2024-10-09 01:35:52.170682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:15:53.520 [2024-10-09 01:35:52.170692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:53.520 [2024-10-09 01:35:52.172958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:53.520 [2024-10-09 01:35:52.172989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:53.520 [2024-10-09 01:35:52.173042] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:15:53.520 [2024-10-09 01:35:52.173066] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:53.521 [2024-10-09 01:35:52.173147] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:15:53.521 [2024-10-09 01:35:52.173156] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:53.521 [2024-10-09 01:35:52.173184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring
00:15:53.521 [2024-10-09 01:35:52.173223] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:53.521 [2024-10-09 01:35:52.173284] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900
00:15:53.521 [2024-10-09 01:35:52.173297] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:15:53.521 [2024-10-09 01:35:52.173506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:15:53.521 [2024-10-09 01:35:52.173631] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900
00:15:53.521 [2024-10-09 01:35:52.173644] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900
00:15:53.521 [2024-10-09 01:35:52.173744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:53.521 pt1
00:15:53.521 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.521 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']'
00:15:53.521 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:15:53.521 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:53.521 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:53.521 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:15:53.521 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:15:53.521 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:15:53.521 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:53.521 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:53.521 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:53.521 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:53.521 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:53.521 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:53.521 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.521 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:53.521 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.521 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:53.521 "name": "raid_bdev1",
00:15:53.521 "uuid": "5d81b486-0899-42ca-9ab7-caeb313e4554",
00:15:53.521 "strip_size_kb": 0,
00:15:53.521 "state": "online",
00:15:53.521 "raid_level": "raid1",
00:15:53.521 "superblock": true,
00:15:53.521 "num_base_bdevs": 2,
00:15:53.521 "num_base_bdevs_discovered": 1,
00:15:53.521 "num_base_bdevs_operational": 1,
00:15:53.521 "base_bdevs_list": [
00:15:53.521 {
00:15:53.521 "name": null,
00:15:53.521 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:53.521 "is_configured": false,
00:15:53.521 "data_offset": 256,
00:15:53.521 "data_size": 7936
00:15:53.521 },
00:15:53.521 {
00:15:53.521 "name": "pt2",
00:15:53.521 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:53.521 "is_configured": true,
00:15:53.521 "data_offset": 256,
00:15:53.521 "data_size": 7936
00:15:53.521 }
00:15:53.521 ]
00:15:53.521 }'
00:15:53.521 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:53.521 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:53.780 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:15:53.780 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.780 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:15:53.780 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:53.780 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.780 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:15:53.780 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:15:53.780 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:53.780 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.780 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:15:53.780 [2024-10-09 01:35:52.630913] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:53.780 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589
-- # [[ 0 == 0 ]] 00:15:53.780 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 5d81b486-0899-42ca-9ab7-caeb313e4554 '!=' 5d81b486-0899-42ca-9ab7-caeb313e4554 ']' 00:15:53.780 01:35:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 97721 00:15:53.780 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 97721 ']' 00:15:53.780 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 97721 00:15:53.780 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:15:53.780 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:53.780 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97721 00:15:54.040 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:54.040 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:54.040 killing process with pid 97721 00:15:54.040 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97721' 00:15:54.040 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 97721 00:15:54.040 [2024-10-09 01:35:52.696967] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:54.040 [2024-10-09 01:35:52.697028] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.040 [2024-10-09 01:35:52.697060] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:54.040 [2024-10-09 01:35:52.697071] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:54.040 01:35:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # 
wait 97721 00:15:54.040 [2024-10-09 01:35:52.737793] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:54.300 01:35:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:15:54.300 00:15:54.300 real 0m4.943s 00:15:54.300 user 0m7.873s 00:15:54.300 sys 0m1.085s 00:15:54.300 01:35:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:54.300 01:35:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.300 ************************************ 00:15:54.300 END TEST raid_superblock_test_4k 00:15:54.300 ************************************ 00:15:54.300 01:35:53 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:15:54.300 01:35:53 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:15:54.300 01:35:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:54.300 01:35:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:54.300 01:35:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:54.300 ************************************ 00:15:54.300 START TEST raid_rebuild_test_sb_4k 00:15:54.300 ************************************ 00:15:54.300 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:15:54.300 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:54.300 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:54.300 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:54.300 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:54.300 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:54.560 01:35:53 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:54.560 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.560 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:54.560 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:54.560 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.560 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:54.560 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:54.560 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:54.560 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:54.560 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:54.560 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:54.560 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:54.560 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:54.560 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:54.560 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:54.560 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:54.560 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:54.560 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:54.560 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:54.560 
01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=98038 00:15:54.560 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:54.560 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 98038 00:15:54.560 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 98038 ']' 00:15:54.560 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.560 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:54.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.560 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.560 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:54.560 01:35:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:54.560 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:54.560 Zero copy mechanism will not be used. 00:15:54.560 [2024-10-09 01:35:53.282486] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:15:54.560 [2024-10-09 01:35:53.282638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98038 ] 00:15:54.560 [2024-10-09 01:35:53.413612] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:15:54.560 [2024-10-09 01:35:53.441902] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.820 [2024-10-09 01:35:53.515125] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.820 [2024-10-09 01:35:53.591689] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.820 [2024-10-09 01:35:53.591743] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:55.389 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:55.389 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.390 BaseBdev1_malloc 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.390 [2024-10-09 01:35:54.131007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:55.390 [2024-10-09 01:35:54.131079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.390 [2024-10-09 01:35:54.131112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000007280 00:15:55.390 [2024-10-09 01:35:54.131131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.390 [2024-10-09 01:35:54.133475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.390 [2024-10-09 01:35:54.133511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:55.390 BaseBdev1 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.390 BaseBdev2_malloc 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.390 [2024-10-09 01:35:54.185987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:55.390 [2024-10-09 01:35:54.186093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.390 [2024-10-09 01:35:54.186136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:55.390 [2024-10-09 01:35:54.186164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.390 [2024-10-09 
01:35:54.191207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.390 [2024-10-09 01:35:54.191280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:55.390 BaseBdev2 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.390 spare_malloc 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.390 spare_delay 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.390 [2024-10-09 01:35:54.236080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:55.390 [2024-10-09 01:35:54.236135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.390 [2024-10-09 01:35:54.236152] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:55.390 [2024-10-09 01:35:54.236163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.390 [2024-10-09 01:35:54.238513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.390 [2024-10-09 01:35:54.238559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:55.390 spare 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.390 [2024-10-09 01:35:54.248137] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.390 [2024-10-09 01:35:54.250189] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:55.390 [2024-10-09 01:35:54.250337] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:55.390 [2024-10-09 01:35:54.250353] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:55.390 [2024-10-09 01:35:54.250615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:55.390 [2024-10-09 01:35:54.250758] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:55.390 [2024-10-09 01:35:54.250776] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:55.390 [2024-10-09 01:35:54.250886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.390 01:35:54 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.390 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.649 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.649 "name": "raid_bdev1", 00:15:55.649 "uuid": "ef5bc67e-cb6c-4ead-9b7e-668766f7e5c5", 00:15:55.649 
"strip_size_kb": 0, 00:15:55.649 "state": "online", 00:15:55.649 "raid_level": "raid1", 00:15:55.649 "superblock": true, 00:15:55.649 "num_base_bdevs": 2, 00:15:55.649 "num_base_bdevs_discovered": 2, 00:15:55.649 "num_base_bdevs_operational": 2, 00:15:55.649 "base_bdevs_list": [ 00:15:55.649 { 00:15:55.649 "name": "BaseBdev1", 00:15:55.649 "uuid": "3103ddcc-22ad-5175-9e58-a9efae7e3a9c", 00:15:55.649 "is_configured": true, 00:15:55.649 "data_offset": 256, 00:15:55.649 "data_size": 7936 00:15:55.649 }, 00:15:55.649 { 00:15:55.649 "name": "BaseBdev2", 00:15:55.649 "uuid": "0a4a7834-2aa0-558d-b8e2-c0be6082a4dd", 00:15:55.649 "is_configured": true, 00:15:55.649 "data_offset": 256, 00:15:55.649 "data_size": 7936 00:15:55.649 } 00:15:55.649 ] 00:15:55.649 }' 00:15:55.649 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.649 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.909 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:55.909 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.909 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.909 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:55.909 [2024-10-09 01:35:54.720450] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.909 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.909 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:15:55.909 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:55.909 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.909 
01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.909 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:55.909 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.909 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:55.909 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:55.909 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:55.909 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:55.909 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:55.909 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:55.909 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:55.909 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:55.909 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:55.909 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:55.909 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:55.909 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:55.909 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:55.909 01:35:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:56.168 [2024-10-09 01:35:54.964354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006080 00:15:56.168 /dev/nbd0 00:15:56.168 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:56.168 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:56.168 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:56.168 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:56.168 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:56.168 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:56.168 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:56.168 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:56.168 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:56.168 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:56.168 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:56.168 1+0 records in 00:15:56.168 1+0 records out 00:15:56.168 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485816 s, 8.4 MB/s 00:15:56.168 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.168 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:56.168 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.168 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:56.168 01:35:55 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:56.168 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:56.168 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:56.168 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:56.168 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:56.168 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:15:56.737 7936+0 records in 00:15:56.737 7936+0 records out 00:15:56.737 32505856 bytes (33 MB, 31 MiB) copied, 0.588724 s, 55.2 MB/s 00:15:56.737 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:56.737 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:56.737 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:56.737 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:56.737 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:56.997 [2024-10-09 01:35:55.839671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.997 [2024-10-09 01:35:55.851773] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.997 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.255 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.256 "name": "raid_bdev1", 00:15:57.256 "uuid": "ef5bc67e-cb6c-4ead-9b7e-668766f7e5c5", 00:15:57.256 "strip_size_kb": 0, 00:15:57.256 "state": "online", 00:15:57.256 "raid_level": "raid1", 00:15:57.256 "superblock": true, 00:15:57.256 "num_base_bdevs": 2, 00:15:57.256 "num_base_bdevs_discovered": 1, 00:15:57.256 "num_base_bdevs_operational": 1, 00:15:57.256 "base_bdevs_list": [ 00:15:57.256 { 00:15:57.256 "name": null, 00:15:57.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.256 "is_configured": false, 00:15:57.256 "data_offset": 0, 00:15:57.256 "data_size": 7936 00:15:57.256 }, 00:15:57.256 { 00:15:57.256 "name": "BaseBdev2", 00:15:57.256 "uuid": "0a4a7834-2aa0-558d-b8e2-c0be6082a4dd", 00:15:57.256 "is_configured": true, 00:15:57.256 "data_offset": 256, 00:15:57.256 "data_size": 7936 00:15:57.256 } 00:15:57.256 ] 00:15:57.256 }' 00:15:57.256 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.256 01:35:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.514 01:35:56 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:57.514 01:35:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.514 01:35:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.514 [2024-10-09 01:35:56.271861] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:57.514 [2024-10-09 01:35:56.279087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:15:57.514 01:35:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.514 01:35:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:57.514 [2024-10-09 01:35:56.281199] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:58.452 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.452 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.452 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.452 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.452 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.452 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.452 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.452 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.452 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.452 01:35:57 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.452 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.452 "name": "raid_bdev1", 00:15:58.452 "uuid": "ef5bc67e-cb6c-4ead-9b7e-668766f7e5c5", 00:15:58.452 "strip_size_kb": 0, 00:15:58.452 "state": "online", 00:15:58.452 "raid_level": "raid1", 00:15:58.452 "superblock": true, 00:15:58.452 "num_base_bdevs": 2, 00:15:58.452 "num_base_bdevs_discovered": 2, 00:15:58.452 "num_base_bdevs_operational": 2, 00:15:58.452 "process": { 00:15:58.452 "type": "rebuild", 00:15:58.452 "target": "spare", 00:15:58.452 "progress": { 00:15:58.452 "blocks": 2560, 00:15:58.452 "percent": 32 00:15:58.452 } 00:15:58.452 }, 00:15:58.452 "base_bdevs_list": [ 00:15:58.452 { 00:15:58.452 "name": "spare", 00:15:58.452 "uuid": "fd3f00da-d245-5d41-b375-0415df8eda21", 00:15:58.452 "is_configured": true, 00:15:58.452 "data_offset": 256, 00:15:58.452 "data_size": 7936 00:15:58.452 }, 00:15:58.452 { 00:15:58.452 "name": "BaseBdev2", 00:15:58.452 "uuid": "0a4a7834-2aa0-558d-b8e2-c0be6082a4dd", 00:15:58.452 "is_configured": true, 00:15:58.452 "data_offset": 256, 00:15:58.452 "data_size": 7936 00:15:58.452 } 00:15:58.452 ] 00:15:58.452 }' 00:15:58.452 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.712 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.712 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.712 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.712 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:58.712 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.712 01:35:57 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.712 [2024-10-09 01:35:57.443529] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:58.712 [2024-10-09 01:35:57.491417] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:58.712 [2024-10-09 01:35:57.491563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.712 [2024-10-09 01:35:57.491605] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:58.712 [2024-10-09 01:35:57.491635] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:58.712 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.712 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:58.712 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.712 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.712 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.712 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.712 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:58.712 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.712 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.712 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.712 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.712 01:35:57 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.712 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.712 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.712 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.712 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.712 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.712 "name": "raid_bdev1", 00:15:58.712 "uuid": "ef5bc67e-cb6c-4ead-9b7e-668766f7e5c5", 00:15:58.712 "strip_size_kb": 0, 00:15:58.712 "state": "online", 00:15:58.712 "raid_level": "raid1", 00:15:58.712 "superblock": true, 00:15:58.712 "num_base_bdevs": 2, 00:15:58.712 "num_base_bdevs_discovered": 1, 00:15:58.712 "num_base_bdevs_operational": 1, 00:15:58.712 "base_bdevs_list": [ 00:15:58.712 { 00:15:58.712 "name": null, 00:15:58.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.712 "is_configured": false, 00:15:58.712 "data_offset": 0, 00:15:58.712 "data_size": 7936 00:15:58.712 }, 00:15:58.712 { 00:15:58.712 "name": "BaseBdev2", 00:15:58.712 "uuid": "0a4a7834-2aa0-558d-b8e2-c0be6082a4dd", 00:15:58.712 "is_configured": true, 00:15:58.712 "data_offset": 256, 00:15:58.712 "data_size": 7936 00:15:58.712 } 00:15:58.712 ] 00:15:58.712 }' 00:15:58.712 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.712 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.280 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:59.280 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.280 01:35:57 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:59.280 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:59.280 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.280 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.280 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.280 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.280 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.280 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.280 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.280 "name": "raid_bdev1", 00:15:59.280 "uuid": "ef5bc67e-cb6c-4ead-9b7e-668766f7e5c5", 00:15:59.280 "strip_size_kb": 0, 00:15:59.280 "state": "online", 00:15:59.280 "raid_level": "raid1", 00:15:59.280 "superblock": true, 00:15:59.280 "num_base_bdevs": 2, 00:15:59.280 "num_base_bdevs_discovered": 1, 00:15:59.280 "num_base_bdevs_operational": 1, 00:15:59.280 "base_bdevs_list": [ 00:15:59.280 { 00:15:59.280 "name": null, 00:15:59.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.280 "is_configured": false, 00:15:59.280 "data_offset": 0, 00:15:59.280 "data_size": 7936 00:15:59.280 }, 00:15:59.280 { 00:15:59.280 "name": "BaseBdev2", 00:15:59.280 "uuid": "0a4a7834-2aa0-558d-b8e2-c0be6082a4dd", 00:15:59.280 "is_configured": true, 00:15:59.280 "data_offset": 256, 00:15:59.280 "data_size": 7936 00:15:59.280 } 00:15:59.280 ] 00:15:59.280 }' 00:15:59.280 01:35:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.280 01:35:58 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:59.280 01:35:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.280 01:35:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:59.280 01:35:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:59.280 01:35:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.280 01:35:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.280 [2024-10-09 01:35:58.062233] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:59.280 [2024-10-09 01:35:58.067518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d400 00:15:59.280 01:35:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.280 01:35:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:59.280 [2024-10-09 01:35:58.069644] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:00.219 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.219 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.219 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.219 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.219 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.219 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.219 01:35:59 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.219 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.219 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.219 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.478 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.478 "name": "raid_bdev1", 00:16:00.478 "uuid": "ef5bc67e-cb6c-4ead-9b7e-668766f7e5c5", 00:16:00.478 "strip_size_kb": 0, 00:16:00.478 "state": "online", 00:16:00.478 "raid_level": "raid1", 00:16:00.478 "superblock": true, 00:16:00.478 "num_base_bdevs": 2, 00:16:00.478 "num_base_bdevs_discovered": 2, 00:16:00.478 "num_base_bdevs_operational": 2, 00:16:00.478 "process": { 00:16:00.478 "type": "rebuild", 00:16:00.478 "target": "spare", 00:16:00.478 "progress": { 00:16:00.478 "blocks": 2560, 00:16:00.478 "percent": 32 00:16:00.478 } 00:16:00.478 }, 00:16:00.478 "base_bdevs_list": [ 00:16:00.478 { 00:16:00.478 "name": "spare", 00:16:00.478 "uuid": "fd3f00da-d245-5d41-b375-0415df8eda21", 00:16:00.478 "is_configured": true, 00:16:00.478 "data_offset": 256, 00:16:00.478 "data_size": 7936 00:16:00.478 }, 00:16:00.478 { 00:16:00.478 "name": "BaseBdev2", 00:16:00.478 "uuid": "0a4a7834-2aa0-558d-b8e2-c0be6082a4dd", 00:16:00.478 "is_configured": true, 00:16:00.478 "data_offset": 256, 00:16:00.478 "data_size": 7936 00:16:00.478 } 00:16:00.478 ] 00:16:00.478 }' 00:16:00.478 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.478 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.478 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.478 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.478 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:00.478 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:00.478 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:00.478 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:00.478 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:00.478 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:00.478 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=575 00:16:00.478 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:00.478 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.478 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.478 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.478 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.478 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.478 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.478 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.478 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.478 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.478 01:35:59 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.478 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.478 "name": "raid_bdev1", 00:16:00.478 "uuid": "ef5bc67e-cb6c-4ead-9b7e-668766f7e5c5", 00:16:00.478 "strip_size_kb": 0, 00:16:00.478 "state": "online", 00:16:00.478 "raid_level": "raid1", 00:16:00.478 "superblock": true, 00:16:00.478 "num_base_bdevs": 2, 00:16:00.478 "num_base_bdevs_discovered": 2, 00:16:00.478 "num_base_bdevs_operational": 2, 00:16:00.478 "process": { 00:16:00.478 "type": "rebuild", 00:16:00.478 "target": "spare", 00:16:00.478 "progress": { 00:16:00.478 "blocks": 2816, 00:16:00.478 "percent": 35 00:16:00.478 } 00:16:00.478 }, 00:16:00.478 "base_bdevs_list": [ 00:16:00.478 { 00:16:00.479 "name": "spare", 00:16:00.479 "uuid": "fd3f00da-d245-5d41-b375-0415df8eda21", 00:16:00.479 "is_configured": true, 00:16:00.479 "data_offset": 256, 00:16:00.479 "data_size": 7936 00:16:00.479 }, 00:16:00.479 { 00:16:00.479 "name": "BaseBdev2", 00:16:00.479 "uuid": "0a4a7834-2aa0-558d-b8e2-c0be6082a4dd", 00:16:00.479 "is_configured": true, 00:16:00.479 "data_offset": 256, 00:16:00.479 "data_size": 7936 00:16:00.479 } 00:16:00.479 ] 00:16:00.479 }' 00:16:00.479 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.479 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.479 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.738 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.738 01:35:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:01.675 01:36:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:01.675 01:36:00 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.675 01:36:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.675 01:36:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.675 01:36:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.675 01:36:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.675 01:36:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.675 01:36:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.675 01:36:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.675 01:36:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.675 01:36:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.675 01:36:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.675 "name": "raid_bdev1", 00:16:01.675 "uuid": "ef5bc67e-cb6c-4ead-9b7e-668766f7e5c5", 00:16:01.675 "strip_size_kb": 0, 00:16:01.675 "state": "online", 00:16:01.675 "raid_level": "raid1", 00:16:01.675 "superblock": true, 00:16:01.675 "num_base_bdevs": 2, 00:16:01.675 "num_base_bdevs_discovered": 2, 00:16:01.675 "num_base_bdevs_operational": 2, 00:16:01.675 "process": { 00:16:01.675 "type": "rebuild", 00:16:01.675 "target": "spare", 00:16:01.675 "progress": { 00:16:01.675 "blocks": 5888, 00:16:01.675 "percent": 74 00:16:01.675 } 00:16:01.675 }, 00:16:01.675 "base_bdevs_list": [ 00:16:01.675 { 00:16:01.675 "name": "spare", 00:16:01.675 "uuid": "fd3f00da-d245-5d41-b375-0415df8eda21", 00:16:01.675 "is_configured": true, 00:16:01.675 "data_offset": 256, 00:16:01.675 "data_size": 7936 00:16:01.675 
}, 00:16:01.675 { 00:16:01.675 "name": "BaseBdev2", 00:16:01.675 "uuid": "0a4a7834-2aa0-558d-b8e2-c0be6082a4dd", 00:16:01.675 "is_configured": true, 00:16:01.675 "data_offset": 256, 00:16:01.675 "data_size": 7936 00:16:01.675 } 00:16:01.675 ] 00:16:01.675 }' 00:16:01.675 01:36:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.675 01:36:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.675 01:36:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.676 01:36:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.676 01:36:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:02.613 [2024-10-09 01:36:01.194449] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:02.613 [2024-10-09 01:36:01.194562] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:02.613 [2024-10-09 01:36:01.194666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.873 "name": "raid_bdev1", 00:16:02.873 "uuid": "ef5bc67e-cb6c-4ead-9b7e-668766f7e5c5", 00:16:02.873 "strip_size_kb": 0, 00:16:02.873 "state": "online", 00:16:02.873 "raid_level": "raid1", 00:16:02.873 "superblock": true, 00:16:02.873 "num_base_bdevs": 2, 00:16:02.873 "num_base_bdevs_discovered": 2, 00:16:02.873 "num_base_bdevs_operational": 2, 00:16:02.873 "base_bdevs_list": [ 00:16:02.873 { 00:16:02.873 "name": "spare", 00:16:02.873 "uuid": "fd3f00da-d245-5d41-b375-0415df8eda21", 00:16:02.873 "is_configured": true, 00:16:02.873 "data_offset": 256, 00:16:02.873 "data_size": 7936 00:16:02.873 }, 00:16:02.873 { 00:16:02.873 "name": "BaseBdev2", 00:16:02.873 "uuid": "0a4a7834-2aa0-558d-b8e2-c0be6082a4dd", 00:16:02.873 "is_configured": true, 00:16:02.873 "data_offset": 256, 00:16:02.873 "data_size": 7936 00:16:02.873 } 00:16:02.873 ] 00:16:02.873 }' 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.873 "name": "raid_bdev1", 00:16:02.873 "uuid": "ef5bc67e-cb6c-4ead-9b7e-668766f7e5c5", 00:16:02.873 "strip_size_kb": 0, 00:16:02.873 "state": "online", 00:16:02.873 "raid_level": "raid1", 00:16:02.873 "superblock": true, 00:16:02.873 "num_base_bdevs": 2, 00:16:02.873 "num_base_bdevs_discovered": 2, 00:16:02.873 "num_base_bdevs_operational": 2, 00:16:02.873 "base_bdevs_list": [ 00:16:02.873 { 00:16:02.873 "name": "spare", 00:16:02.873 "uuid": "fd3f00da-d245-5d41-b375-0415df8eda21", 00:16:02.873 "is_configured": true, 00:16:02.873 "data_offset": 256, 00:16:02.873 "data_size": 7936 00:16:02.873 }, 00:16:02.873 { 00:16:02.873 "name": "BaseBdev2", 00:16:02.873 "uuid": "0a4a7834-2aa0-558d-b8e2-c0be6082a4dd", 00:16:02.873 "is_configured": true, 
00:16:02.873 "data_offset": 256, 00:16:02.873 "data_size": 7936 00:16:02.873 } 00:16:02.873 ] 00:16:02.873 }' 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.873 01:36:01 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.873 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.133 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.133 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.133 "name": "raid_bdev1", 00:16:03.133 "uuid": "ef5bc67e-cb6c-4ead-9b7e-668766f7e5c5", 00:16:03.133 "strip_size_kb": 0, 00:16:03.133 "state": "online", 00:16:03.133 "raid_level": "raid1", 00:16:03.133 "superblock": true, 00:16:03.133 "num_base_bdevs": 2, 00:16:03.133 "num_base_bdevs_discovered": 2, 00:16:03.133 "num_base_bdevs_operational": 2, 00:16:03.133 "base_bdevs_list": [ 00:16:03.133 { 00:16:03.133 "name": "spare", 00:16:03.133 "uuid": "fd3f00da-d245-5d41-b375-0415df8eda21", 00:16:03.133 "is_configured": true, 00:16:03.133 "data_offset": 256, 00:16:03.133 "data_size": 7936 00:16:03.133 }, 00:16:03.133 { 00:16:03.133 "name": "BaseBdev2", 00:16:03.133 "uuid": "0a4a7834-2aa0-558d-b8e2-c0be6082a4dd", 00:16:03.133 "is_configured": true, 00:16:03.133 "data_offset": 256, 00:16:03.133 "data_size": 7936 00:16:03.133 } 00:16:03.133 ] 00:16:03.133 }' 00:16:03.133 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.133 01:36:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.392 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:03.392 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.393 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.393 [2024-10-09 01:36:02.188539] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:03.393 [2024-10-09 01:36:02.188616] bdev_raid.c:1895:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:16:03.393 [2024-10-09 01:36:02.188719] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:03.393 [2024-10-09 01:36:02.188831] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:03.393 [2024-10-09 01:36:02.188864] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:03.393 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.393 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.393 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:16:03.393 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.393 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.393 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.393 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:03.393 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:03.393 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:03.393 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:03.393 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:03.393 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:03.393 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:03.393 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:03.393 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:03.393 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:03.393 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:03.393 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:03.393 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:03.652 /dev/nbd0 00:16:03.652 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:03.652 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:03.652 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:03.652 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:16:03.652 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:03.652 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:03.652 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:03.652 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:16:03.652 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:03.652 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:03.652 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:03.652 1+0 records in 00:16:03.652 1+0 records out 00:16:03.652 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349812 s, 11.7 MB/s 00:16:03.652 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.652 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:16:03.652 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.652 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:03.652 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:16:03.652 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:03.652 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:03.652 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:03.912 /dev/nbd1 00:16:03.912 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:03.912 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:03.912 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:03.912 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:16:03.912 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:03.912 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:03.912 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:03.912 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:16:03.912 01:36:02 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:03.912 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:03.912 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:03.912 1+0 records in 00:16:03.912 1+0 records out 00:16:03.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409404 s, 10.0 MB/s 00:16:03.912 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.912 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:16:03.912 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.912 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:03.912 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:16:03.912 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:03.912 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:03.912 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:04.171 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:04.171 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:04.171 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:04.171 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:04.171 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@51 -- # local i 00:16:04.171 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:04.171 01:36:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:04.171 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:04.171 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:04.172 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:04.172 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:04.172 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:04.172 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:04.172 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:04.172 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:04.172 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:04.172 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:04.431 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:04.431 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:04.431 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:04.431 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:04.431 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:04.431 01:36:03 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:04.431 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:04.431 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:04.431 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:04.431 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:04.431 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.431 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.431 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.431 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:04.431 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.431 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.431 [2024-10-09 01:36:03.284473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:04.431 [2024-10-09 01:36:03.284540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.431 [2024-10-09 01:36:03.284567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:04.431 [2024-10-09 01:36:03.284576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.431 [2024-10-09 01:36:03.287064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.431 [2024-10-09 01:36:03.287172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:04.431 [2024-10-09 01:36:03.287256] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:16:04.431 [2024-10-09 01:36:03.287307] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:04.431 [2024-10-09 01:36:03.287429] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:04.431 spare 00:16:04.431 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.431 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:04.431 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.431 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.691 [2024-10-09 01:36:03.387495] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:04.691 [2024-10-09 01:36:03.387534] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:04.691 [2024-10-09 01:36:03.387813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:16:04.691 [2024-10-09 01:36:03.387969] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:04.691 [2024-10-09 01:36:03.387986] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:04.691 [2024-10-09 01:36:03.388116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.691 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.691 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:04.691 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.691 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.691 
01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.691 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.691 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:04.691 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.691 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.691 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.691 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.691 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.691 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.691 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.691 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.691 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.691 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.691 "name": "raid_bdev1", 00:16:04.691 "uuid": "ef5bc67e-cb6c-4ead-9b7e-668766f7e5c5", 00:16:04.691 "strip_size_kb": 0, 00:16:04.691 "state": "online", 00:16:04.691 "raid_level": "raid1", 00:16:04.691 "superblock": true, 00:16:04.691 "num_base_bdevs": 2, 00:16:04.691 "num_base_bdevs_discovered": 2, 00:16:04.691 "num_base_bdevs_operational": 2, 00:16:04.691 "base_bdevs_list": [ 00:16:04.691 { 00:16:04.691 "name": "spare", 00:16:04.691 "uuid": "fd3f00da-d245-5d41-b375-0415df8eda21", 00:16:04.691 "is_configured": true, 00:16:04.691 "data_offset": 256, 00:16:04.691 
"data_size": 7936 00:16:04.691 }, 00:16:04.691 { 00:16:04.691 "name": "BaseBdev2", 00:16:04.691 "uuid": "0a4a7834-2aa0-558d-b8e2-c0be6082a4dd", 00:16:04.691 "is_configured": true, 00:16:04.691 "data_offset": 256, 00:16:04.691 "data_size": 7936 00:16:04.691 } 00:16:04.691 ] 00:16:04.691 }' 00:16:04.691 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.691 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.962 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:04.962 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.962 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:04.962 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:04.962 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.962 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.962 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.962 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.962 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.962 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.234 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.234 "name": "raid_bdev1", 00:16:05.234 "uuid": "ef5bc67e-cb6c-4ead-9b7e-668766f7e5c5", 00:16:05.234 "strip_size_kb": 0, 00:16:05.234 "state": "online", 00:16:05.234 "raid_level": "raid1", 00:16:05.234 "superblock": true, 00:16:05.234 "num_base_bdevs": 2, 
00:16:05.234 "num_base_bdevs_discovered": 2, 00:16:05.234 "num_base_bdevs_operational": 2, 00:16:05.234 "base_bdevs_list": [ 00:16:05.234 { 00:16:05.234 "name": "spare", 00:16:05.234 "uuid": "fd3f00da-d245-5d41-b375-0415df8eda21", 00:16:05.234 "is_configured": true, 00:16:05.234 "data_offset": 256, 00:16:05.234 "data_size": 7936 00:16:05.234 }, 00:16:05.234 { 00:16:05.234 "name": "BaseBdev2", 00:16:05.234 "uuid": "0a4a7834-2aa0-558d-b8e2-c0be6082a4dd", 00:16:05.234 "is_configured": true, 00:16:05.234 "data_offset": 256, 00:16:05.234 "data_size": 7936 00:16:05.234 } 00:16:05.234 ] 00:16:05.234 }' 00:16:05.234 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.234 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:05.234 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.234 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:05.234 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:05.234 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.234 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.234 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.234 01:36:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.234 01:36:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:05.234 01:36:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:05.234 01:36:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.234 01:36:04 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.234 [2024-10-09 01:36:04.008706] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:05.234 01:36:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.234 01:36:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:05.234 01:36:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.234 01:36:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.234 01:36:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.234 01:36:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.234 01:36:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:05.234 01:36:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.234 01:36:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.234 01:36:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.234 01:36:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.234 01:36:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.234 01:36:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.234 01:36:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.234 01:36:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.234 01:36:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.234 
01:36:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.234 "name": "raid_bdev1", 00:16:05.234 "uuid": "ef5bc67e-cb6c-4ead-9b7e-668766f7e5c5", 00:16:05.234 "strip_size_kb": 0, 00:16:05.234 "state": "online", 00:16:05.234 "raid_level": "raid1", 00:16:05.234 "superblock": true, 00:16:05.234 "num_base_bdevs": 2, 00:16:05.234 "num_base_bdevs_discovered": 1, 00:16:05.235 "num_base_bdevs_operational": 1, 00:16:05.235 "base_bdevs_list": [ 00:16:05.235 { 00:16:05.235 "name": null, 00:16:05.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.235 "is_configured": false, 00:16:05.235 "data_offset": 0, 00:16:05.235 "data_size": 7936 00:16:05.235 }, 00:16:05.235 { 00:16:05.235 "name": "BaseBdev2", 00:16:05.235 "uuid": "0a4a7834-2aa0-558d-b8e2-c0be6082a4dd", 00:16:05.235 "is_configured": true, 00:16:05.235 "data_offset": 256, 00:16:05.235 "data_size": 7936 00:16:05.235 } 00:16:05.235 ] 00:16:05.235 }' 00:16:05.235 01:36:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.235 01:36:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.804 01:36:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:05.804 01:36:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.804 01:36:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.804 [2024-10-09 01:36:04.420833] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:05.804 [2024-10-09 01:36:04.421021] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:05.804 [2024-10-09 01:36:04.421085] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
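The `raid_bdev_info` JSON dumped above by `bdev_raid_get_bdevs` is what `verify_raid_bdev_state` inspects — the real harness filters it with `jq` and compares state, raid level, and base-bdev counts against the expected values. A minimal dependency-free sketch of the same checks, using the field values shown in this trace (the `check` helper is illustrative, not part of the harness):

```shell
#!/bin/sh
# Sample of the JSON shape emitted by bdev_raid_get_bdevs in the trace above,
# after the spare base bdev has been removed (discovered/operational drop to 1).
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1
}'

# check PATTERN: succeed iff the raid bdev info contains PATTERN verbatim.
# (The real script uses jq expressions like '.[] | select(.name == "raid_bdev1")'.)
check() { echo "$raid_bdev_info" | grep -q "$1"; }

check '"state": "online"' &&
check '"raid_level": "raid1"' &&
check '"num_base_bdevs_operational": 1' &&
echo "raid_bdev1 state verified"
```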
00:16:05.804 [2024-10-09 01:36:04.421200] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:05.804 [2024-10-09 01:36:04.428349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:16:05.804 01:36:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.804 01:36:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:05.804 [2024-10-09 01:36:04.430575] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:06.743 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.743 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.743 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.743 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.743 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.743 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.743 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.743 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.743 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.743 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.743 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.743 "name": "raid_bdev1", 00:16:06.743 "uuid": "ef5bc67e-cb6c-4ead-9b7e-668766f7e5c5", 00:16:06.743 "strip_size_kb": 0, 00:16:06.743 "state": "online", 
00:16:06.743 "raid_level": "raid1", 00:16:06.743 "superblock": true, 00:16:06.743 "num_base_bdevs": 2, 00:16:06.743 "num_base_bdevs_discovered": 2, 00:16:06.743 "num_base_bdevs_operational": 2, 00:16:06.743 "process": { 00:16:06.743 "type": "rebuild", 00:16:06.743 "target": "spare", 00:16:06.743 "progress": { 00:16:06.743 "blocks": 2560, 00:16:06.743 "percent": 32 00:16:06.743 } 00:16:06.743 }, 00:16:06.743 "base_bdevs_list": [ 00:16:06.743 { 00:16:06.744 "name": "spare", 00:16:06.744 "uuid": "fd3f00da-d245-5d41-b375-0415df8eda21", 00:16:06.744 "is_configured": true, 00:16:06.744 "data_offset": 256, 00:16:06.744 "data_size": 7936 00:16:06.744 }, 00:16:06.744 { 00:16:06.744 "name": "BaseBdev2", 00:16:06.744 "uuid": "0a4a7834-2aa0-558d-b8e2-c0be6082a4dd", 00:16:06.744 "is_configured": true, 00:16:06.744 "data_offset": 256, 00:16:06.744 "data_size": 7936 00:16:06.744 } 00:16:06.744 ] 00:16:06.744 }' 00:16:06.744 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.744 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:06.744 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.744 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:06.744 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:06.744 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.744 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.744 [2024-10-09 01:36:05.542042] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.004 [2024-10-09 01:36:05.640120] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:07.004 [2024-10-09 
01:36:05.640231] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.004 [2024-10-09 01:36:05.640247] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.004 [2024-10-09 01:36:05.640257] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:07.004 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.004 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:07.004 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.004 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.004 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.004 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.004 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:07.004 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.004 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.004 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.004 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.004 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.004 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.004 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.004 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:16:07.004 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.004 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.004 "name": "raid_bdev1", 00:16:07.004 "uuid": "ef5bc67e-cb6c-4ead-9b7e-668766f7e5c5", 00:16:07.004 "strip_size_kb": 0, 00:16:07.004 "state": "online", 00:16:07.004 "raid_level": "raid1", 00:16:07.004 "superblock": true, 00:16:07.004 "num_base_bdevs": 2, 00:16:07.004 "num_base_bdevs_discovered": 1, 00:16:07.004 "num_base_bdevs_operational": 1, 00:16:07.004 "base_bdevs_list": [ 00:16:07.004 { 00:16:07.004 "name": null, 00:16:07.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.004 "is_configured": false, 00:16:07.004 "data_offset": 0, 00:16:07.004 "data_size": 7936 00:16:07.004 }, 00:16:07.004 { 00:16:07.004 "name": "BaseBdev2", 00:16:07.004 "uuid": "0a4a7834-2aa0-558d-b8e2-c0be6082a4dd", 00:16:07.004 "is_configured": true, 00:16:07.004 "data_offset": 256, 00:16:07.004 "data_size": 7936 00:16:07.004 } 00:16:07.004 ] 00:16:07.004 }' 00:16:07.004 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.004 01:36:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.263 01:36:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:07.263 01:36:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.263 01:36:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.263 [2024-10-09 01:36:06.134789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:07.263 [2024-10-09 01:36:06.134893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.263 [2024-10-09 01:36:06.134931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:16:07.263 [2024-10-09 01:36:06.134961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.263 [2024-10-09 01:36:06.135460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.263 [2024-10-09 01:36:06.135532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:07.264 [2024-10-09 01:36:06.135642] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:07.264 [2024-10-09 01:36:06.135690] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:07.264 [2024-10-09 01:36:06.135748] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:07.264 [2024-10-09 01:36:06.135803] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.264 [2024-10-09 01:36:06.140712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1dc0 00:16:07.264 spare 00:16:07.264 01:36:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.264 01:36:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:07.264 [2024-10-09 01:36:06.142833] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.644 "name": "raid_bdev1", 00:16:08.644 "uuid": "ef5bc67e-cb6c-4ead-9b7e-668766f7e5c5", 00:16:08.644 "strip_size_kb": 0, 00:16:08.644 "state": "online", 00:16:08.644 "raid_level": "raid1", 00:16:08.644 "superblock": true, 00:16:08.644 "num_base_bdevs": 2, 00:16:08.644 "num_base_bdevs_discovered": 2, 00:16:08.644 "num_base_bdevs_operational": 2, 00:16:08.644 "process": { 00:16:08.644 "type": "rebuild", 00:16:08.644 "target": "spare", 00:16:08.644 "progress": { 00:16:08.644 "blocks": 2560, 00:16:08.644 "percent": 32 00:16:08.644 } 00:16:08.644 }, 00:16:08.644 "base_bdevs_list": [ 00:16:08.644 { 00:16:08.644 "name": "spare", 00:16:08.644 "uuid": "fd3f00da-d245-5d41-b375-0415df8eda21", 00:16:08.644 "is_configured": true, 00:16:08.644 "data_offset": 256, 00:16:08.644 "data_size": 7936 00:16:08.644 }, 00:16:08.644 { 00:16:08.644 "name": "BaseBdev2", 00:16:08.644 "uuid": "0a4a7834-2aa0-558d-b8e2-c0be6082a4dd", 00:16:08.644 "is_configured": true, 00:16:08.644 "data_offset": 256, 00:16:08.644 "data_size": 7936 00:16:08.644 } 00:16:08.644 ] 00:16:08.644 }' 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.644 [2024-10-09 01:36:07.283756] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:08.644 [2024-10-09 01:36:07.352410] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:08.644 [2024-10-09 01:36:07.352464] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.644 [2024-10-09 01:36:07.352482] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:08.644 [2024-10-09 01:36:07.352489] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.644 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.644 "name": "raid_bdev1", 00:16:08.644 "uuid": "ef5bc67e-cb6c-4ead-9b7e-668766f7e5c5", 00:16:08.644 "strip_size_kb": 0, 00:16:08.644 "state": "online", 00:16:08.644 "raid_level": "raid1", 00:16:08.644 "superblock": true, 00:16:08.644 "num_base_bdevs": 2, 00:16:08.644 "num_base_bdevs_discovered": 1, 00:16:08.644 "num_base_bdevs_operational": 1, 00:16:08.644 "base_bdevs_list": [ 00:16:08.644 { 00:16:08.644 "name": null, 00:16:08.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.644 "is_configured": false, 00:16:08.644 "data_offset": 0, 00:16:08.644 "data_size": 7936 00:16:08.644 }, 00:16:08.644 { 00:16:08.644 "name": "BaseBdev2", 00:16:08.645 "uuid": "0a4a7834-2aa0-558d-b8e2-c0be6082a4dd", 00:16:08.645 "is_configured": true, 00:16:08.645 "data_offset": 256, 00:16:08.645 "data_size": 7936 00:16:08.645 } 00:16:08.645 ] 00:16:08.645 }' 
00:16:08.645 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.645 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.215 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:09.215 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.215 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:09.215 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:09.215 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.215 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.215 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.215 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.215 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.215 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.215 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.215 "name": "raid_bdev1", 00:16:09.215 "uuid": "ef5bc67e-cb6c-4ead-9b7e-668766f7e5c5", 00:16:09.215 "strip_size_kb": 0, 00:16:09.215 "state": "online", 00:16:09.215 "raid_level": "raid1", 00:16:09.215 "superblock": true, 00:16:09.215 "num_base_bdevs": 2, 00:16:09.215 "num_base_bdevs_discovered": 1, 00:16:09.215 "num_base_bdevs_operational": 1, 00:16:09.215 "base_bdevs_list": [ 00:16:09.215 { 00:16:09.215 "name": null, 00:16:09.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.215 "is_configured": false, 00:16:09.215 "data_offset": 0, 
00:16:09.215 "data_size": 7936 00:16:09.215 }, 00:16:09.215 { 00:16:09.215 "name": "BaseBdev2", 00:16:09.215 "uuid": "0a4a7834-2aa0-558d-b8e2-c0be6082a4dd", 00:16:09.215 "is_configured": true, 00:16:09.215 "data_offset": 256, 00:16:09.215 "data_size": 7936 00:16:09.215 } 00:16:09.215 ] 00:16:09.215 }' 00:16:09.215 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.215 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:09.215 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.215 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:09.215 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:09.215 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.215 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.215 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.215 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:09.215 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.215 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.215 [2024-10-09 01:36:07.982633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:09.215 [2024-10-09 01:36:07.982682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.215 [2024-10-09 01:36:07.982706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:09.215 [2024-10-09 01:36:07.982714] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.215 [2024-10-09 01:36:07.983141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.215 [2024-10-09 01:36:07.983162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:09.215 [2024-10-09 01:36:07.983239] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:09.215 [2024-10-09 01:36:07.983258] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:09.215 [2024-10-09 01:36:07.983269] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:09.215 [2024-10-09 01:36:07.983279] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:09.215 BaseBdev1 00:16:09.215 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.215 01:36:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:10.158 01:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:10.158 01:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.158 01:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.158 01:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.158 01:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.158 01:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:10.158 01:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.158 01:36:08 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.158 01:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.158 01:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.158 01:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.158 01:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.158 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.158 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:10.158 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.158 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.158 "name": "raid_bdev1", 00:16:10.158 "uuid": "ef5bc67e-cb6c-4ead-9b7e-668766f7e5c5", 00:16:10.158 "strip_size_kb": 0, 00:16:10.158 "state": "online", 00:16:10.158 "raid_level": "raid1", 00:16:10.158 "superblock": true, 00:16:10.158 "num_base_bdevs": 2, 00:16:10.158 "num_base_bdevs_discovered": 1, 00:16:10.158 "num_base_bdevs_operational": 1, 00:16:10.158 "base_bdevs_list": [ 00:16:10.158 { 00:16:10.158 "name": null, 00:16:10.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.158 "is_configured": false, 00:16:10.158 "data_offset": 0, 00:16:10.158 "data_size": 7936 00:16:10.158 }, 00:16:10.158 { 00:16:10.158 "name": "BaseBdev2", 00:16:10.158 "uuid": "0a4a7834-2aa0-558d-b8e2-c0be6082a4dd", 00:16:10.158 "is_configured": true, 00:16:10.158 "data_offset": 256, 00:16:10.158 "data_size": 7936 00:16:10.158 } 00:16:10.158 ] 00:16:10.158 }' 00:16:10.158 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.158 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:16:10.728 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:10.728 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.729 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:10.729 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:10.729 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.729 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.729 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.729 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:10.729 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.729 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.729 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.729 "name": "raid_bdev1", 00:16:10.729 "uuid": "ef5bc67e-cb6c-4ead-9b7e-668766f7e5c5", 00:16:10.729 "strip_size_kb": 0, 00:16:10.729 "state": "online", 00:16:10.729 "raid_level": "raid1", 00:16:10.729 "superblock": true, 00:16:10.729 "num_base_bdevs": 2, 00:16:10.729 "num_base_bdevs_discovered": 1, 00:16:10.729 "num_base_bdevs_operational": 1, 00:16:10.729 "base_bdevs_list": [ 00:16:10.729 { 00:16:10.729 "name": null, 00:16:10.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.729 "is_configured": false, 00:16:10.729 "data_offset": 0, 00:16:10.729 "data_size": 7936 00:16:10.729 }, 00:16:10.729 { 00:16:10.729 "name": "BaseBdev2", 00:16:10.729 "uuid": "0a4a7834-2aa0-558d-b8e2-c0be6082a4dd", 00:16:10.729 "is_configured": true, 
00:16:10.729 "data_offset": 256, 00:16:10.729 "data_size": 7936 00:16:10.729 } 00:16:10.729 ] 00:16:10.729 }' 00:16:10.729 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.729 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:10.729 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.989 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:10.989 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:10.989 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 00:16:10.989 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:10.989 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:10.989 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:10.989 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:10.989 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:10.989 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:10.989 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.989 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:10.989 [2024-10-09 01:36:09.631068] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:10.989 [2024-10-09 01:36:09.631197] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:10.989 [2024-10-09 01:36:09.631213] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:10.989 request: 00:16:10.989 { 00:16:10.989 "base_bdev": "BaseBdev1", 00:16:10.989 "raid_bdev": "raid_bdev1", 00:16:10.989 "method": "bdev_raid_add_base_bdev", 00:16:10.989 "req_id": 1 00:16:10.989 } 00:16:10.989 Got JSON-RPC error response 00:16:10.989 response: 00:16:10.989 { 00:16:10.989 "code": -22, 00:16:10.989 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:10.989 } 00:16:10.989 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:10.989 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:16:10.989 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:10.989 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:10.989 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:10.989 01:36:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:11.929 01:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:11.929 01:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.929 01:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.929 01:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.929 01:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.930 01:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:16:11.930 01:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.930 01:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.930 01:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.930 01:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.930 01:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.930 01:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.930 01:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.930 01:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.930 01:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.930 01:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.930 "name": "raid_bdev1", 00:16:11.930 "uuid": "ef5bc67e-cb6c-4ead-9b7e-668766f7e5c5", 00:16:11.930 "strip_size_kb": 0, 00:16:11.930 "state": "online", 00:16:11.930 "raid_level": "raid1", 00:16:11.930 "superblock": true, 00:16:11.930 "num_base_bdevs": 2, 00:16:11.930 "num_base_bdevs_discovered": 1, 00:16:11.930 "num_base_bdevs_operational": 1, 00:16:11.930 "base_bdevs_list": [ 00:16:11.930 { 00:16:11.930 "name": null, 00:16:11.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.930 "is_configured": false, 00:16:11.930 "data_offset": 0, 00:16:11.930 "data_size": 7936 00:16:11.930 }, 00:16:11.930 { 00:16:11.930 "name": "BaseBdev2", 00:16:11.930 "uuid": "0a4a7834-2aa0-558d-b8e2-c0be6082a4dd", 00:16:11.930 "is_configured": true, 00:16:11.930 "data_offset": 256, 00:16:11.930 "data_size": 7936 00:16:11.930 } 00:16:11.930 ] 00:16:11.930 }' 
00:16:11.930 01:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.930 01:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.501 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:12.501 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.501 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:12.501 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:12.501 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.501 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.501 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.501 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.501 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.501 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.501 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.501 "name": "raid_bdev1", 00:16:12.501 "uuid": "ef5bc67e-cb6c-4ead-9b7e-668766f7e5c5", 00:16:12.501 "strip_size_kb": 0, 00:16:12.501 "state": "online", 00:16:12.501 "raid_level": "raid1", 00:16:12.501 "superblock": true, 00:16:12.501 "num_base_bdevs": 2, 00:16:12.501 "num_base_bdevs_discovered": 1, 00:16:12.501 "num_base_bdevs_operational": 1, 00:16:12.501 "base_bdevs_list": [ 00:16:12.501 { 00:16:12.501 "name": null, 00:16:12.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.501 "is_configured": false, 00:16:12.501 "data_offset": 0, 
00:16:12.501 "data_size": 7936 00:16:12.501 }, 00:16:12.501 { 00:16:12.501 "name": "BaseBdev2", 00:16:12.501 "uuid": "0a4a7834-2aa0-558d-b8e2-c0be6082a4dd", 00:16:12.501 "is_configured": true, 00:16:12.501 "data_offset": 256, 00:16:12.501 "data_size": 7936 00:16:12.501 } 00:16:12.501 ] 00:16:12.501 }' 00:16:12.501 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.501 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:12.501 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.501 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:12.501 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 98038 00:16:12.501 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 98038 ']' 00:16:12.501 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 98038 00:16:12.501 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:16:12.501 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:12.501 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98038 00:16:12.501 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:12.501 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:12.501 killing process with pid 98038 00:16:12.501 Received shutdown signal, test time was about 60.000000 seconds 00:16:12.501 00:16:12.501 Latency(us) 00:16:12.501 [2024-10-09T01:36:11.394Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.501 [2024-10-09T01:36:11.394Z] 
=================================================================================================================== 00:16:12.501 [2024-10-09T01:36:11.394Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:12.501 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98038' 00:16:12.501 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 98038 00:16:12.501 [2024-10-09 01:36:11.251116] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:12.501 [2024-10-09 01:36:11.251215] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:12.501 [2024-10-09 01:36:11.251256] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:12.501 [2024-10-09 01:36:11.251268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:12.501 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 98038 00:16:12.501 [2024-10-09 01:36:11.308064] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:13.072 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:16:13.072 00:16:13.072 real 0m18.479s 00:16:13.072 user 0m24.299s 00:16:13.072 sys 0m2.672s 00:16:13.072 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:13.072 ************************************ 00:16:13.072 END TEST raid_rebuild_test_sb_4k 00:16:13.072 ************************************ 00:16:13.072 01:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.072 01:36:11 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:16:13.072 01:36:11 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:16:13.072 01:36:11 
bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:13.072 01:36:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:13.072 01:36:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:13.072 ************************************ 00:16:13.072 START TEST raid_state_function_test_sb_md_separate 00:16:13.072 ************************************ 00:16:13.072 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=98712 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 98712' 00:16:13.073 Process raid pid: 98712 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 98712 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 98712 ']' 00:16:13.073 01:36:11 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:13.073 01:36:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.073 [2024-10-09 01:36:11.844680] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:16:13.073 [2024-10-09 01:36:11.844918] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.333 [2024-10-09 01:36:11.978401] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:13.333 [2024-10-09 01:36:12.008091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.333 [2024-10-09 01:36:12.080251] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.333 [2024-10-09 01:36:12.155576] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:13.333 [2024-10-09 01:36:12.155689] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:13.904 01:36:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:13.904 01:36:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:16:13.904 01:36:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:13.904 01:36:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.904 01:36:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.904 [2024-10-09 01:36:12.683733] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:13.904 [2024-10-09 01:36:12.683848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:13.904 [2024-10-09 01:36:12.683881] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:13.904 [2024-10-09 01:36:12.683901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:13.904 01:36:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.904 01:36:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:13.904 01:36:12 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:13.904 01:36:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:13.904 01:36:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:13.904 01:36:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:13.904 01:36:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:13.904 01:36:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.904 01:36:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.904 01:36:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.904 01:36:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.904 01:36:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.904 01:36:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.904 01:36:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.904 01:36:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:13.904 01:36:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.904 01:36:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.904 "name": "Existed_Raid", 00:16:13.904 "uuid": "c1515c41-161a-4f80-a9e3-a3df5564ce7c", 00:16:13.904 "strip_size_kb": 0, 00:16:13.904 "state": 
"configuring", 00:16:13.904 "raid_level": "raid1", 00:16:13.904 "superblock": true, 00:16:13.904 "num_base_bdevs": 2, 00:16:13.904 "num_base_bdevs_discovered": 0, 00:16:13.904 "num_base_bdevs_operational": 2, 00:16:13.904 "base_bdevs_list": [ 00:16:13.904 { 00:16:13.904 "name": "BaseBdev1", 00:16:13.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.904 "is_configured": false, 00:16:13.904 "data_offset": 0, 00:16:13.904 "data_size": 0 00:16:13.904 }, 00:16:13.904 { 00:16:13.904 "name": "BaseBdev2", 00:16:13.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.904 "is_configured": false, 00:16:13.904 "data_offset": 0, 00:16:13.904 "data_size": 0 00:16:13.904 } 00:16:13.904 ] 00:16:13.904 }' 00:16:13.904 01:36:12 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.904 01:36:12 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:14.475 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:14.475 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.475 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:14.475 [2024-10-09 01:36:13.107733] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:14.475 [2024-10-09 01:36:13.107808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:14.475 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.475 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:14.475 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.475 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:14.475 [2024-10-09 01:36:13.115743] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:14.475 [2024-10-09 01:36:13.115776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:14.475 [2024-10-09 01:36:13.115786] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:14.475 [2024-10-09 01:36:13.115793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:14.475 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.475 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:16:14.475 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.475 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:14.475 [2024-10-09 01:36:13.139758] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:14.475 BaseBdev1 00:16:14.475 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.475 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:14.475 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:14.475 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:14.475 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 
00:16:14.475 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:14.475 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:14.475 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:14.475 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.475 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:14.476 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.476 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:14.476 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.476 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:14.476 [ 00:16:14.476 { 00:16:14.476 "name": "BaseBdev1", 00:16:14.476 "aliases": [ 00:16:14.476 "6380c874-f407-42e9-a37e-f55bce580d6b" 00:16:14.476 ], 00:16:14.476 "product_name": "Malloc disk", 00:16:14.476 "block_size": 4096, 00:16:14.476 "num_blocks": 8192, 00:16:14.476 "uuid": "6380c874-f407-42e9-a37e-f55bce580d6b", 00:16:14.476 "md_size": 32, 00:16:14.476 "md_interleave": false, 00:16:14.476 "dif_type": 0, 00:16:14.476 "assigned_rate_limits": { 00:16:14.476 "rw_ios_per_sec": 0, 00:16:14.476 "rw_mbytes_per_sec": 0, 00:16:14.476 "r_mbytes_per_sec": 0, 00:16:14.476 "w_mbytes_per_sec": 0 00:16:14.476 }, 00:16:14.476 "claimed": true, 00:16:14.476 "claim_type": "exclusive_write", 00:16:14.476 "zoned": false, 00:16:14.476 "supported_io_types": { 00:16:14.476 "read": true, 00:16:14.476 "write": true, 00:16:14.476 "unmap": true, 
00:16:14.476 "flush": true, 00:16:14.476 "reset": true, 00:16:14.476 "nvme_admin": false, 00:16:14.476 "nvme_io": false, 00:16:14.476 "nvme_io_md": false, 00:16:14.476 "write_zeroes": true, 00:16:14.476 "zcopy": true, 00:16:14.476 "get_zone_info": false, 00:16:14.476 "zone_management": false, 00:16:14.476 "zone_append": false, 00:16:14.476 "compare": false, 00:16:14.476 "compare_and_write": false, 00:16:14.476 "abort": true, 00:16:14.476 "seek_hole": false, 00:16:14.476 "seek_data": false, 00:16:14.476 "copy": true, 00:16:14.476 "nvme_iov_md": false 00:16:14.476 }, 00:16:14.476 "memory_domains": [ 00:16:14.476 { 00:16:14.476 "dma_device_id": "system", 00:16:14.476 "dma_device_type": 1 00:16:14.476 }, 00:16:14.476 { 00:16:14.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.476 "dma_device_type": 2 00:16:14.476 } 00:16:14.476 ], 00:16:14.476 "driver_specific": {} 00:16:14.476 } 00:16:14.476 ] 00:16:14.476 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.476 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:16:14.476 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:14.476 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:14.476 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:14.476 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.476 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.476 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:14.476 01:36:13 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.476 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.476 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.476 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.476 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.476 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.476 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.476 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:14.476 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.476 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.476 "name": "Existed_Raid", 00:16:14.476 "uuid": "b5d1882f-71cf-446f-85b6-3d6fed35232e", 00:16:14.476 "strip_size_kb": 0, 00:16:14.476 "state": "configuring", 00:16:14.476 "raid_level": "raid1", 00:16:14.476 "superblock": true, 00:16:14.476 "num_base_bdevs": 2, 00:16:14.476 "num_base_bdevs_discovered": 1, 00:16:14.476 "num_base_bdevs_operational": 2, 00:16:14.476 "base_bdevs_list": [ 00:16:14.476 { 00:16:14.476 "name": "BaseBdev1", 00:16:14.476 "uuid": "6380c874-f407-42e9-a37e-f55bce580d6b", 00:16:14.476 "is_configured": true, 00:16:14.476 "data_offset": 256, 00:16:14.476 "data_size": 7936 00:16:14.476 }, 00:16:14.476 { 00:16:14.476 "name": "BaseBdev2", 00:16:14.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.476 "is_configured": 
false, 00:16:14.476 "data_offset": 0, 00:16:14.476 "data_size": 0 00:16:14.476 } 00:16:14.476 ] 00:16:14.476 }' 00:16:14.476 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.476 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:14.736 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:14.736 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.736 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:14.736 [2024-10-09 01:36:13.559878] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:14.736 [2024-10-09 01:36:13.559965] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:14.736 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.736 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:14.736 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.736 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:14.736 [2024-10-09 01:36:13.571969] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:14.736 [2024-10-09 01:36:13.574100] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:14.736 [2024-10-09 01:36:13.574140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:14.736 01:36:13 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.736 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:14.736 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:14.736 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:14.736 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:14.736 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:14.736 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.736 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.736 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:14.736 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.737 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.737 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.737 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.737 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.737 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.737 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.737 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:14.737 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.997 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.997 "name": "Existed_Raid", 00:16:14.997 "uuid": "ae16df6b-e5ba-4175-8aa5-33a5f9d65e78", 00:16:14.997 "strip_size_kb": 0, 00:16:14.997 "state": "configuring", 00:16:14.997 "raid_level": "raid1", 00:16:14.997 "superblock": true, 00:16:14.997 "num_base_bdevs": 2, 00:16:14.997 "num_base_bdevs_discovered": 1, 00:16:14.997 "num_base_bdevs_operational": 2, 00:16:14.997 "base_bdevs_list": [ 00:16:14.997 { 00:16:14.997 "name": "BaseBdev1", 00:16:14.997 "uuid": "6380c874-f407-42e9-a37e-f55bce580d6b", 00:16:14.997 "is_configured": true, 00:16:14.997 "data_offset": 256, 00:16:14.997 "data_size": 7936 00:16:14.997 }, 00:16:14.997 { 00:16:14.997 "name": "BaseBdev2", 00:16:14.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.997 "is_configured": false, 00:16:14.997 "data_offset": 0, 00:16:14.997 "data_size": 0 00:16:14.997 } 00:16:14.997 ] 00:16:14.997 }' 00:16:14.997 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.997 01:36:13 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.258 [2024-10-09 01:36:14.059925] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:15.258 [2024-10-09 01:36:14.060640] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:15.258 [2024-10-09 01:36:14.060838] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:15.258 [2024-10-09 01:36:14.061174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:15.258 BaseBdev2 00:16:15.258 [2024-10-09 01:36:14.061637] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:15.258 [2024-10-09 01:36:14.061800] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.258 [2024-10-09 01:36:14.062200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.258 [ 00:16:15.258 { 00:16:15.258 "name": "BaseBdev2", 00:16:15.258 "aliases": [ 00:16:15.258 "bb6c9378-6d96-4e00-a750-6c4a938d441f" 00:16:15.258 ], 00:16:15.258 "product_name": "Malloc disk", 00:16:15.258 "block_size": 4096, 00:16:15.258 "num_blocks": 8192, 00:16:15.258 "uuid": "bb6c9378-6d96-4e00-a750-6c4a938d441f", 00:16:15.258 "md_size": 32, 00:16:15.258 "md_interleave": false, 00:16:15.258 "dif_type": 0, 00:16:15.258 "assigned_rate_limits": { 00:16:15.258 "rw_ios_per_sec": 0, 00:16:15.258 "rw_mbytes_per_sec": 0, 00:16:15.258 "r_mbytes_per_sec": 0, 00:16:15.258 "w_mbytes_per_sec": 0 00:16:15.258 }, 00:16:15.258 "claimed": true, 00:16:15.258 "claim_type": "exclusive_write", 00:16:15.258 "zoned": false, 00:16:15.258 "supported_io_types": { 00:16:15.258 "read": true, 00:16:15.258 "write": true, 00:16:15.258 "unmap": true, 00:16:15.258 "flush": true, 00:16:15.258 "reset": true, 00:16:15.258 "nvme_admin": false, 00:16:15.258 "nvme_io": false, 00:16:15.258 "nvme_io_md": false, 00:16:15.258 "write_zeroes": true, 00:16:15.258 "zcopy": true, 00:16:15.258 "get_zone_info": false, 00:16:15.258 "zone_management": false, 00:16:15.258 "zone_append": false, 00:16:15.258 "compare": false, 00:16:15.258 "compare_and_write": false, 00:16:15.258 "abort": true, 00:16:15.258 "seek_hole": false, 
00:16:15.258 "seek_data": false, 00:16:15.258 "copy": true, 00:16:15.258 "nvme_iov_md": false 00:16:15.258 }, 00:16:15.258 "memory_domains": [ 00:16:15.258 { 00:16:15.258 "dma_device_id": "system", 00:16:15.258 "dma_device_type": 1 00:16:15.258 }, 00:16:15.258 { 00:16:15.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.258 "dma_device_type": 2 00:16:15.258 } 00:16:15.258 ], 00:16:15.258 "driver_specific": {} 00:16:15.258 } 00:16:15.258 ] 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.258 
01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.258 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.519 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.519 "name": "Existed_Raid", 00:16:15.519 "uuid": "ae16df6b-e5ba-4175-8aa5-33a5f9d65e78", 00:16:15.519 "strip_size_kb": 0, 00:16:15.519 "state": "online", 00:16:15.519 "raid_level": "raid1", 00:16:15.519 "superblock": true, 00:16:15.519 "num_base_bdevs": 2, 00:16:15.519 "num_base_bdevs_discovered": 2, 00:16:15.519 "num_base_bdevs_operational": 2, 00:16:15.519 "base_bdevs_list": [ 00:16:15.519 { 00:16:15.519 "name": "BaseBdev1", 00:16:15.519 "uuid": "6380c874-f407-42e9-a37e-f55bce580d6b", 00:16:15.519 "is_configured": true, 00:16:15.519 "data_offset": 256, 00:16:15.519 "data_size": 7936 00:16:15.519 }, 00:16:15.519 { 00:16:15.519 "name": "BaseBdev2", 00:16:15.519 "uuid": "bb6c9378-6d96-4e00-a750-6c4a938d441f", 00:16:15.519 "is_configured": true, 00:16:15.519 "data_offset": 256, 00:16:15.519 "data_size": 7936 00:16:15.519 } 00:16:15.519 ] 00:16:15.519 }' 00:16:15.519 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:16:15.519 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.779 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:15.779 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:15.779 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:15.779 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:15.779 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:15.779 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:15.779 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:15.779 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.779 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:15.779 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:15.779 [2024-10-09 01:36:14.552254] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:15.779 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.779 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:15.779 "name": "Existed_Raid", 00:16:15.779 "aliases": [ 00:16:15.779 "ae16df6b-e5ba-4175-8aa5-33a5f9d65e78" 00:16:15.779 ], 00:16:15.779 "product_name": "Raid Volume", 00:16:15.779 "block_size": 4096, 00:16:15.779 "num_blocks": 7936, 
00:16:15.779 "uuid": "ae16df6b-e5ba-4175-8aa5-33a5f9d65e78", 00:16:15.779 "md_size": 32, 00:16:15.779 "md_interleave": false, 00:16:15.779 "dif_type": 0, 00:16:15.779 "assigned_rate_limits": { 00:16:15.779 "rw_ios_per_sec": 0, 00:16:15.779 "rw_mbytes_per_sec": 0, 00:16:15.779 "r_mbytes_per_sec": 0, 00:16:15.779 "w_mbytes_per_sec": 0 00:16:15.779 }, 00:16:15.779 "claimed": false, 00:16:15.779 "zoned": false, 00:16:15.779 "supported_io_types": { 00:16:15.779 "read": true, 00:16:15.779 "write": true, 00:16:15.779 "unmap": false, 00:16:15.779 "flush": false, 00:16:15.779 "reset": true, 00:16:15.779 "nvme_admin": false, 00:16:15.779 "nvme_io": false, 00:16:15.779 "nvme_io_md": false, 00:16:15.779 "write_zeroes": true, 00:16:15.779 "zcopy": false, 00:16:15.779 "get_zone_info": false, 00:16:15.779 "zone_management": false, 00:16:15.779 "zone_append": false, 00:16:15.779 "compare": false, 00:16:15.779 "compare_and_write": false, 00:16:15.779 "abort": false, 00:16:15.779 "seek_hole": false, 00:16:15.779 "seek_data": false, 00:16:15.779 "copy": false, 00:16:15.779 "nvme_iov_md": false 00:16:15.779 }, 00:16:15.779 "memory_domains": [ 00:16:15.779 { 00:16:15.779 "dma_device_id": "system", 00:16:15.779 "dma_device_type": 1 00:16:15.779 }, 00:16:15.779 { 00:16:15.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.779 "dma_device_type": 2 00:16:15.779 }, 00:16:15.779 { 00:16:15.779 "dma_device_id": "system", 00:16:15.779 "dma_device_type": 1 00:16:15.779 }, 00:16:15.779 { 00:16:15.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.779 "dma_device_type": 2 00:16:15.779 } 00:16:15.779 ], 00:16:15.779 "driver_specific": { 00:16:15.779 "raid": { 00:16:15.779 "uuid": "ae16df6b-e5ba-4175-8aa5-33a5f9d65e78", 00:16:15.779 "strip_size_kb": 0, 00:16:15.779 "state": "online", 00:16:15.779 "raid_level": "raid1", 00:16:15.779 "superblock": true, 00:16:15.779 "num_base_bdevs": 2, 00:16:15.779 "num_base_bdevs_discovered": 2, 00:16:15.779 "num_base_bdevs_operational": 2, 00:16:15.779 
"base_bdevs_list": [ 00:16:15.779 { 00:16:15.779 "name": "BaseBdev1", 00:16:15.779 "uuid": "6380c874-f407-42e9-a37e-f55bce580d6b", 00:16:15.779 "is_configured": true, 00:16:15.779 "data_offset": 256, 00:16:15.779 "data_size": 7936 00:16:15.779 }, 00:16:15.779 { 00:16:15.779 "name": "BaseBdev2", 00:16:15.779 "uuid": "bb6c9378-6d96-4e00-a750-6c4a938d441f", 00:16:15.779 "is_configured": true, 00:16:15.779 "data_offset": 256, 00:16:15.779 "data_size": 7936 00:16:15.779 } 00:16:15.779 ] 00:16:15.779 } 00:16:15.779 } 00:16:15.779 }' 00:16:15.779 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:15.779 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:15.779 BaseBdev2' 00:16:15.779 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:15.779 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:15.779 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:15.779 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.040 [2024-10-09 01:36:14.772161] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.040 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.040 "name": "Existed_Raid", 00:16:16.040 "uuid": "ae16df6b-e5ba-4175-8aa5-33a5f9d65e78", 00:16:16.040 "strip_size_kb": 0, 00:16:16.040 "state": "online", 00:16:16.040 "raid_level": "raid1", 00:16:16.040 "superblock": true, 00:16:16.040 "num_base_bdevs": 2, 00:16:16.040 "num_base_bdevs_discovered": 1, 00:16:16.040 "num_base_bdevs_operational": 1, 00:16:16.040 "base_bdevs_list": [ 00:16:16.040 { 00:16:16.040 "name": null, 00:16:16.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.040 "is_configured": false, 00:16:16.040 "data_offset": 0, 00:16:16.040 "data_size": 7936 00:16:16.040 }, 00:16:16.040 { 00:16:16.040 "name": "BaseBdev2", 00:16:16.040 "uuid": "bb6c9378-6d96-4e00-a750-6c4a938d441f", 00:16:16.040 "is_configured": true, 00:16:16.040 "data_offset": 256, 00:16:16.040 "data_size": 7936 00:16:16.041 } 00:16:16.041 ] 00:16:16.041 }' 00:16:16.041 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.041 01:36:14 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs 
)) 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.612 [2024-10-09 01:36:15.325939] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:16.612 [2024-10-09 01:36:15.326051] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:16.612 [2024-10-09 01:36:15.348287] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.612 [2024-10-09 01:36:15.348341] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:16.612 [2024-10-09 01:36:15.348359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:16.612 01:36:15 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 98712 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 98712 ']' 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 98712 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:16.612 01:36:15 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98712 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:16.612 killing process with pid 98712 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98712' 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 98712 00:16:16.612 [2024-10-09 01:36:15.429991] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:16.612 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 98712 00:16:16.612 [2024-10-09 01:36:15.431502] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:17.182 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:16:17.182 00:16:17.182 real 0m4.059s 00:16:17.182 user 0m6.150s 00:16:17.182 sys 0m0.910s 00:16:17.182 ************************************ 00:16:17.182 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:17.182 01:36:15 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.182 END TEST raid_state_function_test_sb_md_separate 00:16:17.182 ************************************ 00:16:17.182 01:36:15 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:16:17.182 01:36:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:17.182 01:36:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:17.182 01:36:15 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:16:17.182 ************************************ 00:16:17.182 START TEST raid_superblock_test_md_separate 00:16:17.182 ************************************ 00:16:17.182 01:36:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:16:17.182 01:36:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:17.182 01:36:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:17.182 01:36:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:17.182 01:36:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:17.182 01:36:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:17.182 01:36:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:17.182 01:36:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:17.182 01:36:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:17.182 01:36:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:17.182 01:36:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:17.182 01:36:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:17.182 01:36:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:17.182 01:36:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:17.182 01:36:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:17.182 01:36:15 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:17.182 01:36:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=98953 00:16:17.182 01:36:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:17.182 01:36:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 98953 00:16:17.182 01:36:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 98953 ']' 00:16:17.182 01:36:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.182 01:36:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:17.182 01:36:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.182 01:36:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:17.182 01:36:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:17.182 [2024-10-09 01:36:15.976218] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:16:17.182 [2024-10-09 01:36:15.976446] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98953 ] 00:16:17.442 [2024-10-09 01:36:16.108933] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:16:17.442 [2024-10-09 01:36:16.138444] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.442 [2024-10-09 01:36:16.208932] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.442 [2024-10-09 01:36:16.284100] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.442 [2024-10-09 01:36:16.284142] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.013 01:36:16 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.013 malloc1 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.013 [2024-10-09 01:36:16.832086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:18.013 [2024-10-09 01:36:16.832231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.013 [2024-10-09 01:36:16.832277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:18.013 [2024-10-09 01:36:16.832307] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.013 [2024-10-09 01:36:16.834560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.013 [2024-10-09 01:36:16.834628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:18.013 pt1 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:18.013 01:36:16 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.013 malloc2 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.013 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:18.014 01:36:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.014 01:36:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.014 [2024-10-09 01:36:16.891086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:18.014 [2024-10-09 01:36:16.891172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.014 [2024-10-09 01:36:16.891205] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:18.014 [2024-10-09 01:36:16.891221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.014 [2024-10-09 01:36:16.894866] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.014 [2024-10-09 01:36:16.894920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:18.014 pt2 00:16:18.014 01:36:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.014 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:18.014 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:18.014 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:18.014 01:36:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.014 01:36:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.014 [2024-10-09 01:36:16.903195] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:18.274 [2024-10-09 01:36:16.905825] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:18.274 [2024-10-09 01:36:16.906020] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:18.274 [2024-10-09 01:36:16.906041] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:18.274 [2024-10-09 01:36:16.906150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:18.274 [2024-10-09 01:36:16.906287] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:18.274 [2024-10-09 01:36:16.906301] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:18.274 [2024-10-09 01:36:16.906410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.274 01:36:16 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.274 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:18.274 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.274 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.274 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.274 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.274 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:18.274 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.274 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.274 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.274 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.274 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.274 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.274 01:36:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.274 01:36:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.274 01:36:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.274 01:36:16 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.274 "name": "raid_bdev1", 00:16:18.274 "uuid": "82d77230-3291-41c8-81d5-f3dbdcc15abe", 00:16:18.274 "strip_size_kb": 0, 00:16:18.274 "state": "online", 00:16:18.274 "raid_level": "raid1", 00:16:18.274 "superblock": true, 00:16:18.274 "num_base_bdevs": 2, 00:16:18.274 "num_base_bdevs_discovered": 2, 00:16:18.274 "num_base_bdevs_operational": 2, 00:16:18.274 "base_bdevs_list": [ 00:16:18.274 { 00:16:18.274 "name": "pt1", 00:16:18.274 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:18.274 "is_configured": true, 00:16:18.274 "data_offset": 256, 00:16:18.274 "data_size": 7936 00:16:18.274 }, 00:16:18.274 { 00:16:18.274 "name": "pt2", 00:16:18.274 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:18.274 "is_configured": true, 00:16:18.274 "data_offset": 256, 00:16:18.274 "data_size": 7936 00:16:18.274 } 00:16:18.274 ] 00:16:18.274 }' 00:16:18.274 01:36:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.274 01:36:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.534 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:18.534 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:18.534 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:18.534 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:18.534 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:18.534 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:18.534 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:18.534 
01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:18.534 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.534 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.534 [2024-10-09 01:36:17.387543] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:18.534 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:18.795 "name": "raid_bdev1", 00:16:18.795 "aliases": [ 00:16:18.795 "82d77230-3291-41c8-81d5-f3dbdcc15abe" 00:16:18.795 ], 00:16:18.795 "product_name": "Raid Volume", 00:16:18.795 "block_size": 4096, 00:16:18.795 "num_blocks": 7936, 00:16:18.795 "uuid": "82d77230-3291-41c8-81d5-f3dbdcc15abe", 00:16:18.795 "md_size": 32, 00:16:18.795 "md_interleave": false, 00:16:18.795 "dif_type": 0, 00:16:18.795 "assigned_rate_limits": { 00:16:18.795 "rw_ios_per_sec": 0, 00:16:18.795 "rw_mbytes_per_sec": 0, 00:16:18.795 "r_mbytes_per_sec": 0, 00:16:18.795 "w_mbytes_per_sec": 0 00:16:18.795 }, 00:16:18.795 "claimed": false, 00:16:18.795 "zoned": false, 00:16:18.795 "supported_io_types": { 00:16:18.795 "read": true, 00:16:18.795 "write": true, 00:16:18.795 "unmap": false, 00:16:18.795 "flush": false, 00:16:18.795 "reset": true, 00:16:18.795 "nvme_admin": false, 00:16:18.795 "nvme_io": false, 00:16:18.795 "nvme_io_md": false, 00:16:18.795 "write_zeroes": true, 00:16:18.795 "zcopy": false, 00:16:18.795 "get_zone_info": false, 00:16:18.795 "zone_management": false, 00:16:18.795 "zone_append": false, 00:16:18.795 "compare": false, 00:16:18.795 "compare_and_write": false, 00:16:18.795 "abort": false, 00:16:18.795 "seek_hole": false, 00:16:18.795 "seek_data": false, 00:16:18.795 "copy": false, 00:16:18.795 "nvme_iov_md": false 
00:16:18.795 }, 00:16:18.795 "memory_domains": [ 00:16:18.795 { 00:16:18.795 "dma_device_id": "system", 00:16:18.795 "dma_device_type": 1 00:16:18.795 }, 00:16:18.795 { 00:16:18.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.795 "dma_device_type": 2 00:16:18.795 }, 00:16:18.795 { 00:16:18.795 "dma_device_id": "system", 00:16:18.795 "dma_device_type": 1 00:16:18.795 }, 00:16:18.795 { 00:16:18.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.795 "dma_device_type": 2 00:16:18.795 } 00:16:18.795 ], 00:16:18.795 "driver_specific": { 00:16:18.795 "raid": { 00:16:18.795 "uuid": "82d77230-3291-41c8-81d5-f3dbdcc15abe", 00:16:18.795 "strip_size_kb": 0, 00:16:18.795 "state": "online", 00:16:18.795 "raid_level": "raid1", 00:16:18.795 "superblock": true, 00:16:18.795 "num_base_bdevs": 2, 00:16:18.795 "num_base_bdevs_discovered": 2, 00:16:18.795 "num_base_bdevs_operational": 2, 00:16:18.795 "base_bdevs_list": [ 00:16:18.795 { 00:16:18.795 "name": "pt1", 00:16:18.795 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:18.795 "is_configured": true, 00:16:18.795 "data_offset": 256, 00:16:18.795 "data_size": 7936 00:16:18.795 }, 00:16:18.795 { 00:16:18.795 "name": "pt2", 00:16:18.795 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:18.795 "is_configured": true, 00:16:18.795 "data_offset": 256, 00:16:18.795 "data_size": 7936 00:16:18.795 } 00:16:18.795 ] 00:16:18.795 } 00:16:18.795 } 00:16:18.795 }' 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:18.795 pt2' 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- 
# cmp_raid_bdev='4096 32 false 0' 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:18.795 01:36:17 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.795 [2024-10-09 01:36:17.619457] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=82d77230-3291-41c8-81d5-f3dbdcc15abe 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 82d77230-3291-41c8-81d5-f3dbdcc15abe ']' 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.795 [2024-10-09 01:36:17.663255] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:18.795 [2024-10-09 01:36:17.663278] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:18.795 [2024-10-09 01:36:17.663362] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:18.795 [2024-10-09 01:36:17.663416] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:16:18.795 [2024-10-09 01:36:17.663439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:18.795 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.056 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.056 [2024-10-09 01:36:17.803288] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:19.056 [2024-10-09 01:36:17.805295] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:19.056 [2024-10-09 01:36:17.805409] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:19.056 [2024-10-09 01:36:17.805456] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:19.057 [2024-10-09 01:36:17.805470] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:19.057 [2024-10-09 01:36:17.805479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:19.057 request: 00:16:19.057 { 00:16:19.057 "name": "raid_bdev1", 00:16:19.057 "raid_level": "raid1", 00:16:19.057 "base_bdevs": [ 00:16:19.057 "malloc1", 00:16:19.057 "malloc2" 00:16:19.057 ], 00:16:19.057 "superblock": false, 00:16:19.057 "method": "bdev_raid_create", 00:16:19.057 "req_id": 1 00:16:19.057 } 00:16:19.057 Got JSON-RPC error response 00:16:19.057 response: 00:16:19.057 { 00:16:19.057 "code": -17, 00:16:19.057 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:19.057 } 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 
00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.057 [2024-10-09 01:36:17.855295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:19.057 [2024-10-09 01:36:17.855379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.057 [2024-10-09 01:36:17.855409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:19.057 [2024-10-09 01:36:17.855438] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.057 [2024-10-09 01:36:17.857471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.057 [2024-10-09 01:36:17.857551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:19.057 [2024-10-09 01:36:17.857609] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:19.057 [2024-10-09 01:36:17.857668] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:19.057 pt1 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.057 "name": "raid_bdev1", 00:16:19.057 "uuid": "82d77230-3291-41c8-81d5-f3dbdcc15abe", 00:16:19.057 "strip_size_kb": 0, 00:16:19.057 "state": "configuring", 00:16:19.057 "raid_level": "raid1", 00:16:19.057 "superblock": true, 00:16:19.057 "num_base_bdevs": 2, 00:16:19.057 "num_base_bdevs_discovered": 1, 00:16:19.057 "num_base_bdevs_operational": 2, 00:16:19.057 "base_bdevs_list": [ 00:16:19.057 { 00:16:19.057 "name": "pt1", 00:16:19.057 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:19.057 "is_configured": true, 00:16:19.057 "data_offset": 256, 00:16:19.057 "data_size": 7936 00:16:19.057 }, 00:16:19.057 { 00:16:19.057 "name": null, 00:16:19.057 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:19.057 "is_configured": false, 00:16:19.057 "data_offset": 256, 00:16:19.057 "data_size": 7936 00:16:19.057 } 00:16:19.057 ] 00:16:19.057 }' 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.057 01:36:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.629 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:19.629 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:19.629 01:36:18 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:19.629 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:19.629 01:36:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.629 01:36:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.629 [2024-10-09 01:36:18.295398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:19.629 [2024-10-09 01:36:18.295448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.629 [2024-10-09 01:36:18.295464] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:19.629 [2024-10-09 01:36:18.295473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.629 [2024-10-09 01:36:18.295605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.629 [2024-10-09 01:36:18.295620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:19.629 [2024-10-09 01:36:18.295653] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:19.629 [2024-10-09 01:36:18.295671] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:19.629 [2024-10-09 01:36:18.295736] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:19.629 [2024-10-09 01:36:18.295745] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:19.629 [2024-10-09 01:36:18.295800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:19.629 [2024-10-09 01:36:18.295882] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:19.629 [2024-10-09 01:36:18.295890] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:19.629 [2024-10-09 01:36:18.295946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.629 pt2 00:16:19.629 01:36:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.629 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:19.629 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:19.629 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:19.629 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.629 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.629 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:19.629 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:19.629 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:19.629 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.629 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.629 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.629 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.629 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.629 01:36:18 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.629 01:36:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.629 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.629 01:36:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.629 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.629 "name": "raid_bdev1", 00:16:19.629 "uuid": "82d77230-3291-41c8-81d5-f3dbdcc15abe", 00:16:19.629 "strip_size_kb": 0, 00:16:19.629 "state": "online", 00:16:19.629 "raid_level": "raid1", 00:16:19.629 "superblock": true, 00:16:19.629 "num_base_bdevs": 2, 00:16:19.629 "num_base_bdevs_discovered": 2, 00:16:19.629 "num_base_bdevs_operational": 2, 00:16:19.629 "base_bdevs_list": [ 00:16:19.629 { 00:16:19.629 "name": "pt1", 00:16:19.629 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:19.629 "is_configured": true, 00:16:19.629 "data_offset": 256, 00:16:19.629 "data_size": 7936 00:16:19.629 }, 00:16:19.629 { 00:16:19.629 "name": "pt2", 00:16:19.629 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:19.629 "is_configured": true, 00:16:19.629 "data_offset": 256, 00:16:19.629 "data_size": 7936 00:16:19.629 } 00:16:19.629 ] 00:16:19.629 }' 00:16:19.629 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.629 01:36:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.890 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:19.890 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:19.890 01:36:18 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:19.890 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:19.890 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:19.890 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:19.890 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:19.890 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:19.890 01:36:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.890 01:36:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:19.890 [2024-10-09 01:36:18.759747] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.890 01:36:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.890 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:19.890 "name": "raid_bdev1", 00:16:19.890 "aliases": [ 00:16:19.890 "82d77230-3291-41c8-81d5-f3dbdcc15abe" 00:16:19.890 ], 00:16:19.890 "product_name": "Raid Volume", 00:16:19.890 "block_size": 4096, 00:16:19.890 "num_blocks": 7936, 00:16:19.890 "uuid": "82d77230-3291-41c8-81d5-f3dbdcc15abe", 00:16:19.890 "md_size": 32, 00:16:19.890 "md_interleave": false, 00:16:19.890 "dif_type": 0, 00:16:19.890 "assigned_rate_limits": { 00:16:19.890 "rw_ios_per_sec": 0, 00:16:19.890 "rw_mbytes_per_sec": 0, 00:16:19.890 "r_mbytes_per_sec": 0, 00:16:19.890 "w_mbytes_per_sec": 0 00:16:19.890 }, 00:16:19.890 "claimed": false, 00:16:19.890 "zoned": false, 00:16:19.890 "supported_io_types": { 00:16:19.890 "read": true, 00:16:19.890 "write": true, 00:16:19.890 "unmap": false, 00:16:19.890 
"flush": false, 00:16:19.890 "reset": true, 00:16:19.890 "nvme_admin": false, 00:16:19.890 "nvme_io": false, 00:16:19.890 "nvme_io_md": false, 00:16:19.890 "write_zeroes": true, 00:16:19.890 "zcopy": false, 00:16:19.890 "get_zone_info": false, 00:16:19.890 "zone_management": false, 00:16:19.890 "zone_append": false, 00:16:19.890 "compare": false, 00:16:19.890 "compare_and_write": false, 00:16:19.890 "abort": false, 00:16:19.890 "seek_hole": false, 00:16:19.890 "seek_data": false, 00:16:19.890 "copy": false, 00:16:19.890 "nvme_iov_md": false 00:16:19.890 }, 00:16:19.890 "memory_domains": [ 00:16:19.890 { 00:16:19.890 "dma_device_id": "system", 00:16:19.890 "dma_device_type": 1 00:16:19.890 }, 00:16:19.890 { 00:16:19.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.890 "dma_device_type": 2 00:16:19.890 }, 00:16:19.890 { 00:16:19.890 "dma_device_id": "system", 00:16:19.890 "dma_device_type": 1 00:16:19.890 }, 00:16:19.890 { 00:16:19.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.890 "dma_device_type": 2 00:16:19.890 } 00:16:19.890 ], 00:16:19.890 "driver_specific": { 00:16:19.890 "raid": { 00:16:19.890 "uuid": "82d77230-3291-41c8-81d5-f3dbdcc15abe", 00:16:19.890 "strip_size_kb": 0, 00:16:19.890 "state": "online", 00:16:19.890 "raid_level": "raid1", 00:16:19.890 "superblock": true, 00:16:19.890 "num_base_bdevs": 2, 00:16:19.890 "num_base_bdevs_discovered": 2, 00:16:19.890 "num_base_bdevs_operational": 2, 00:16:19.890 "base_bdevs_list": [ 00:16:19.890 { 00:16:19.890 "name": "pt1", 00:16:19.890 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:19.890 "is_configured": true, 00:16:19.890 "data_offset": 256, 00:16:19.890 "data_size": 7936 00:16:19.890 }, 00:16:19.890 { 00:16:19.890 "name": "pt2", 00:16:19.890 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:19.890 "is_configured": true, 00:16:19.890 "data_offset": 256, 00:16:19.890 "data_size": 7936 00:16:19.890 } 00:16:19.890 ] 00:16:19.890 } 00:16:19.890 } 00:16:19.890 }' 00:16:19.890 01:36:18 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:20.150 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:20.150 pt2' 00:16:20.150 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.150 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:20.150 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:20.150 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:20.150 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.150 01:36:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.150 01:36:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.150 01:36:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.150 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:20.150 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:20.150 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:20.150 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:20.150 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.150 01:36:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.150 01:36:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.150 01:36:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.151 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:20.151 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:20.151 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:20.151 01:36:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.151 01:36:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.151 01:36:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:20.151 [2024-10-09 01:36:18.971798] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:20.151 01:36:18 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.151 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 82d77230-3291-41c8-81d5-f3dbdcc15abe '!=' 82d77230-3291-41c8-81d5-f3dbdcc15abe ']' 00:16:20.151 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:20.151 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:20.151 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:16:20.151 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd 
bdev_passthru_delete pt1 00:16:20.151 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.151 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.151 [2024-10-09 01:36:19.019624] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:20.151 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.151 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:20.151 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.151 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.151 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:20.151 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:20.151 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:20.151 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.151 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.151 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.151 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.151 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.151 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.151 01:36:19 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.151 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.411 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.411 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.411 "name": "raid_bdev1", 00:16:20.411 "uuid": "82d77230-3291-41c8-81d5-f3dbdcc15abe", 00:16:20.411 "strip_size_kb": 0, 00:16:20.411 "state": "online", 00:16:20.411 "raid_level": "raid1", 00:16:20.411 "superblock": true, 00:16:20.411 "num_base_bdevs": 2, 00:16:20.411 "num_base_bdevs_discovered": 1, 00:16:20.411 "num_base_bdevs_operational": 1, 00:16:20.411 "base_bdevs_list": [ 00:16:20.411 { 00:16:20.411 "name": null, 00:16:20.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.411 "is_configured": false, 00:16:20.411 "data_offset": 0, 00:16:20.411 "data_size": 7936 00:16:20.411 }, 00:16:20.411 { 00:16:20.411 "name": "pt2", 00:16:20.411 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:20.411 "is_configured": true, 00:16:20.411 "data_offset": 256, 00:16:20.411 "data_size": 7936 00:16:20.411 } 00:16:20.411 ] 00:16:20.411 }' 00:16:20.411 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.411 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.671 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:20.671 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.671 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.671 [2024-10-09 01:36:19.387698] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:20.671 
[2024-10-09 01:36:19.387768] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:20.671 [2024-10-09 01:36:19.387836] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:20.671 [2024-10-09 01:36:19.387885] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:20.671 [2024-10-09 01:36:19.387917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.672 [2024-10-09 01:36:19.459723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:20.672 [2024-10-09 01:36:19.459806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.672 [2024-10-09 01:36:19.459833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:20.672 [2024-10-09 01:36:19.459859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.672 [2024-10-09 01:36:19.462093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.672 [2024-10-09 01:36:19.462175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:20.672 [2024-10-09 01:36:19.462230] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:20.672 [2024-10-09 
01:36:19.462273] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:20.672 [2024-10-09 01:36:19.462340] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:20.672 [2024-10-09 01:36:19.462363] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:20.672 [2024-10-09 01:36:19.462437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:20.672 [2024-10-09 01:36:19.462553] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:20.672 [2024-10-09 01:36:19.462588] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:20.672 [2024-10-09 01:36:19.462678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.672 pt2 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.672 "name": "raid_bdev1", 00:16:20.672 "uuid": "82d77230-3291-41c8-81d5-f3dbdcc15abe", 00:16:20.672 "strip_size_kb": 0, 00:16:20.672 "state": "online", 00:16:20.672 "raid_level": "raid1", 00:16:20.672 "superblock": true, 00:16:20.672 "num_base_bdevs": 2, 00:16:20.672 "num_base_bdevs_discovered": 1, 00:16:20.672 "num_base_bdevs_operational": 1, 00:16:20.672 "base_bdevs_list": [ 00:16:20.672 { 00:16:20.672 "name": null, 00:16:20.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.672 "is_configured": false, 00:16:20.672 "data_offset": 256, 00:16:20.672 "data_size": 7936 00:16:20.672 }, 00:16:20.672 { 00:16:20.672 "name": "pt2", 00:16:20.672 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:20.672 "is_configured": true, 00:16:20.672 "data_offset": 256, 00:16:20.672 "data_size": 7936 00:16:20.672 } 00:16:20.672 ] 00:16:20.672 }' 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.672 01:36:19 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:21.242 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:21.242 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.242 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.242 [2024-10-09 01:36:19.879800] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:21.242 [2024-10-09 01:36:19.879866] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:21.242 [2024-10-09 01:36:19.879926] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.242 [2024-10-09 01:36:19.879987] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:21.242 [2024-10-09 01:36:19.880026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:21.242 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:21.243 
01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.243 [2024-10-09 01:36:19.931838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:21.243 [2024-10-09 01:36:19.931879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.243 [2024-10-09 01:36:19.931900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:21.243 [2024-10-09 01:36:19.931908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.243 [2024-10-09 01:36:19.934018] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.243 [2024-10-09 01:36:19.934052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:21.243 [2024-10-09 01:36:19.934091] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:21.243 [2024-10-09 01:36:19.934131] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:21.243 [2024-10-09 01:36:19.934220] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:21.243 [2024-10-09 01:36:19.934230] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:21.243 [2024-10-09 01:36:19.934243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:21.243 [2024-10-09 01:36:19.934283] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:21.243 [2024-10-09 01:36:19.934336] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:21.243 [2024-10-09 01:36:19.934345] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:21.243 [2024-10-09 01:36:19.934404] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:21.243 [2024-10-09 01:36:19.934467] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:21.243 [2024-10-09 01:36:19.934484] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:21.243 [2024-10-09 01:36:19.934561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.243 pt1 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.243 01:36:19 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.243 "name": "raid_bdev1", 00:16:21.243 "uuid": "82d77230-3291-41c8-81d5-f3dbdcc15abe", 00:16:21.243 "strip_size_kb": 0, 00:16:21.243 "state": "online", 00:16:21.243 "raid_level": "raid1", 00:16:21.243 "superblock": true, 00:16:21.243 "num_base_bdevs": 2, 00:16:21.243 "num_base_bdevs_discovered": 1, 00:16:21.243 "num_base_bdevs_operational": 1, 00:16:21.243 "base_bdevs_list": [ 00:16:21.243 { 00:16:21.243 "name": null, 00:16:21.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.243 "is_configured": false, 00:16:21.243 "data_offset": 256, 00:16:21.243 "data_size": 7936 00:16:21.243 }, 00:16:21.243 { 00:16:21.243 "name": "pt2", 00:16:21.243 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:21.243 "is_configured": true, 00:16:21.243 "data_offset": 256, 00:16:21.243 "data_size": 7936 00:16:21.243 } 00:16:21.243 ] 00:16:21.243 }' 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:16:21.243 01:36:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.503 01:36:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:21.503 01:36:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.503 01:36:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.503 01:36:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:21.503 01:36:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.763 01:36:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:21.763 01:36:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:21.763 01:36:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:21.763 01:36:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.763 01:36:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:21.763 [2024-10-09 01:36:20.432143] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.763 01:36:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.763 01:36:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 82d77230-3291-41c8-81d5-f3dbdcc15abe '!=' 82d77230-3291-41c8-81d5-f3dbdcc15abe ']' 00:16:21.763 01:36:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 98953 00:16:21.763 01:36:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 98953 ']' 00:16:21.763 
01:36:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 98953 00:16:21.763 01:36:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:16:21.763 01:36:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:21.763 01:36:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98953 00:16:21.763 01:36:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:21.763 killing process with pid 98953 00:16:21.763 01:36:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:21.763 01:36:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98953' 00:16:21.763 01:36:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 98953 00:16:21.763 [2024-10-09 01:36:20.500152] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:21.763 [2024-10-09 01:36:20.500218] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.763 [2024-10-09 01:36:20.500247] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:21.763 [2024-10-09 01:36:20.500257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:21.763 01:36:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 98953 00:16:21.763 [2024-10-09 01:36:20.544336] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:22.335 01:36:20 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:16:22.335 00:16:22.335 real 0m5.038s 00:16:22.335 user 0m7.981s 00:16:22.335 sys 0m1.144s 00:16:22.335 01:36:20 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:22.335 01:36:20 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.335 ************************************ 00:16:22.335 END TEST raid_superblock_test_md_separate 00:16:22.335 ************************************ 00:16:22.335 01:36:20 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:16:22.335 01:36:20 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:16:22.335 01:36:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:22.335 01:36:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:22.335 01:36:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:22.335 ************************************ 00:16:22.335 START TEST raid_rebuild_test_sb_md_separate 00:16:22.335 ************************************ 00:16:22.335 01:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:16:22.335 01:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:22.335 01:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:22.335 01:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:22.335 01:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:22.335 01:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:22.335 01:36:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:22.335 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:22.335 01:36:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:22.335 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:22.335 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:22.335 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:22.335 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:22.335 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:22.335 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:22.335 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:22.335 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:22.335 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:22.335 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:22.335 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:22.335 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:22.335 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:22.335 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:22.335 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:22.335 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:22.335 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@597 -- # raid_pid=99269 00:16:22.335 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:22.335 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 99269 00:16:22.335 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 99269 ']' 00:16:22.335 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.335 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:22.335 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.335 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:22.335 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:22.335 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:22.335 Zero copy mechanism will not be used. 00:16:22.335 [2024-10-09 01:36:21.089864] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:16:22.335 [2024-10-09 01:36:21.090070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99269 ] 00:16:22.335 [2024-10-09 01:36:21.221131] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:16:22.595 [2024-10-09 01:36:21.249809] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.595 [2024-10-09 01:36:21.320151] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.595 [2024-10-09 01:36:21.395657] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:22.595 [2024-10-09 01:36:21.395811] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:23.166 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:23.166 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:16:23.166 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:23.166 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:16:23.166 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.166 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.166 BaseBdev1_malloc 00:16:23.166 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.166 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:23.166 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.166 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.166 [2024-10-09 01:36:21.943492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:23.166 [2024-10-09 01:36:21.943639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.166 
[2024-10-09 01:36:21.943682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:23.166 [2024-10-09 01:36:21.943730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.166 [2024-10-09 01:36:21.945977] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.166 [2024-10-09 01:36:21.946049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:23.166 BaseBdev1 00:16:23.166 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.166 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:23.166 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:16:23.166 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.166 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.166 BaseBdev2_malloc 00:16:23.166 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.166 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:23.166 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.166 01:36:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.166 [2024-10-09 01:36:21.997241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:23.166 [2024-10-09 01:36:21.997354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.166 [2024-10-09 01:36:21.997400] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007e80 00:16:23.166 [2024-10-09 01:36:21.997427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.166 [2024-10-09 01:36:22.001327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.166 [2024-10-09 01:36:22.001458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:23.166 BaseBdev2 00:16:23.166 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.166 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:16:23.166 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.166 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.166 spare_malloc 00:16:23.166 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.166 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:23.166 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.166 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.166 spare_delay 00:16:23.166 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.166 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:23.166 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.166 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:16:23.166 [2024-10-09 01:36:22.047212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:23.167 [2024-10-09 01:36:22.047276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.167 [2024-10-09 01:36:22.047300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:23.167 [2024-10-09 01:36:22.047313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.167 [2024-10-09 01:36:22.049494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.167 [2024-10-09 01:36:22.049542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:23.167 spare 00:16:23.167 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.167 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:23.167 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.167 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.427 [2024-10-09 01:36:22.059268] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:23.427 [2024-10-09 01:36:22.061513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:23.427 [2024-10-09 01:36:22.061687] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:23.427 [2024-10-09 01:36:22.061701] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:23.427 [2024-10-09 01:36:22.061773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:23.427 [2024-10-09 01:36:22.061871] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007780 00:16:23.427 [2024-10-09 01:36:22.061879] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:23.427 [2024-10-09 01:36:22.061972] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.427 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.427 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:23.427 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.427 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.427 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.427 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.427 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:23.427 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.427 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.427 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.427 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.427 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.428 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.428 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.428 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.428 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.428 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.428 "name": "raid_bdev1", 00:16:23.428 "uuid": "7139637d-f40c-41f0-ab75-12b4f1a22ee5", 00:16:23.428 "strip_size_kb": 0, 00:16:23.428 "state": "online", 00:16:23.428 "raid_level": "raid1", 00:16:23.428 "superblock": true, 00:16:23.428 "num_base_bdevs": 2, 00:16:23.428 "num_base_bdevs_discovered": 2, 00:16:23.428 "num_base_bdevs_operational": 2, 00:16:23.428 "base_bdevs_list": [ 00:16:23.428 { 00:16:23.428 "name": "BaseBdev1", 00:16:23.428 "uuid": "85ba7f45-fdd2-54d3-b7b9-a98b8b22a0ef", 00:16:23.428 "is_configured": true, 00:16:23.428 "data_offset": 256, 00:16:23.428 "data_size": 7936 00:16:23.428 }, 00:16:23.428 { 00:16:23.428 "name": "BaseBdev2", 00:16:23.428 "uuid": "280fcfe4-6686-5984-af07-83f536de1c6c", 00:16:23.428 "is_configured": true, 00:16:23.428 "data_offset": 256, 00:16:23.428 "data_size": 7936 00:16:23.428 } 00:16:23.428 ] 00:16:23.428 }' 00:16:23.428 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.428 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.688 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:23.688 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:23.688 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.688 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.688 [2024-10-09 01:36:22.439587] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:23.688 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.688 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:23.688 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.688 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.688 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:23.688 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:23.688 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.688 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:23.688 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:23.688 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:23.688 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:23.689 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:23.689 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:23.689 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:23.689 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:23.689 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0') 00:16:23.689 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:23.689 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:23.689 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:23.689 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:23.689 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:23.949 [2024-10-09 01:36:22.683434] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:23.949 /dev/nbd0 00:16:23.949 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:23.949 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:23.949 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:23.949 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:16:23.949 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:23.949 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:23.949 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:23.949 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:16:23.949 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:23.949 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:23.949 01:36:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:23.949 1+0 records in 00:16:23.949 1+0 records out 00:16:23.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349865 s, 11.7 MB/s 00:16:23.949 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:23.949 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:16:23.949 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:23.949 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:23.949 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:16:23.949 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:23.949 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:23.949 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:23.949 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:23.949 01:36:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:24.520 7936+0 records in 00:16:24.520 7936+0 records out 00:16:24.520 32505856 bytes (33 MB, 31 MiB) copied, 0.546031 s, 59.5 MB/s 00:16:24.520 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:24.520 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:24.520 
01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:24.520 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:24.520 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:24.520 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:24.520 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:24.781 [2024-10-09 01:36:23.521601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:16:24.781 [2024-10-09 01:36:23.549680] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:24.781 01:36:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.781 "name": "raid_bdev1", 00:16:24.781 "uuid": "7139637d-f40c-41f0-ab75-12b4f1a22ee5", 00:16:24.781 "strip_size_kb": 0, 00:16:24.781 "state": "online", 00:16:24.781 "raid_level": "raid1", 00:16:24.781 "superblock": true, 00:16:24.781 "num_base_bdevs": 2, 00:16:24.781 "num_base_bdevs_discovered": 1, 00:16:24.781 "num_base_bdevs_operational": 1, 00:16:24.781 "base_bdevs_list": [ 00:16:24.781 { 00:16:24.781 "name": null, 00:16:24.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.781 "is_configured": false, 00:16:24.781 "data_offset": 0, 00:16:24.781 "data_size": 7936 00:16:24.781 }, 00:16:24.781 { 00:16:24.781 "name": "BaseBdev2", 00:16:24.781 "uuid": "280fcfe4-6686-5984-af07-83f536de1c6c", 00:16:24.781 "is_configured": true, 00:16:24.781 "data_offset": 256, 00:16:24.781 "data_size": 7936 00:16:24.781 } 00:16:24.781 ] 00:16:24.781 }' 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.781 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.352 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:25.352 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.352 01:36:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.352 [2024-10-09 01:36:24.001828] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:25.352 [2024-10-09 01:36:24.004670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:16:25.352 [2024-10-09 01:36:24.006840] bdev_raid.c:2931:raid_bdev_process_thread_init: 
*NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:25.352 01:36:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.352 01:36:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:26.292 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:26.292 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.292 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:26.292 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:26.292 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.292 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.292 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.292 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.292 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.292 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.292 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.292 "name": "raid_bdev1", 00:16:26.292 "uuid": "7139637d-f40c-41f0-ab75-12b4f1a22ee5", 00:16:26.292 "strip_size_kb": 0, 00:16:26.292 "state": "online", 00:16:26.292 "raid_level": "raid1", 00:16:26.292 "superblock": true, 00:16:26.292 "num_base_bdevs": 2, 00:16:26.292 "num_base_bdevs_discovered": 2, 00:16:26.292 "num_base_bdevs_operational": 2, 00:16:26.292 "process": { 
00:16:26.292 "type": "rebuild", 00:16:26.292 "target": "spare", 00:16:26.292 "progress": { 00:16:26.292 "blocks": 2560, 00:16:26.292 "percent": 32 00:16:26.292 } 00:16:26.292 }, 00:16:26.292 "base_bdevs_list": [ 00:16:26.292 { 00:16:26.292 "name": "spare", 00:16:26.292 "uuid": "801ab8b8-4737-5173-9d9d-183caf81000f", 00:16:26.292 "is_configured": true, 00:16:26.292 "data_offset": 256, 00:16:26.292 "data_size": 7936 00:16:26.292 }, 00:16:26.292 { 00:16:26.292 "name": "BaseBdev2", 00:16:26.292 "uuid": "280fcfe4-6686-5984-af07-83f536de1c6c", 00:16:26.292 "is_configured": true, 00:16:26.292 "data_offset": 256, 00:16:26.292 "data_size": 7936 00:16:26.292 } 00:16:26.292 ] 00:16:26.292 }' 00:16:26.292 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.292 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:26.292 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.292 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:26.292 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:26.292 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.292 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.292 [2024-10-09 01:36:25.172459] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:26.553 [2024-10-09 01:36:25.217270] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:26.553 [2024-10-09 01:36:25.217381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.553 [2024-10-09 01:36:25.217412] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:26.553 [2024-10-09 01:36:25.217438] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:26.553 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.553 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:26.553 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.553 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.553 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.553 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.553 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:26.553 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.553 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.553 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.553 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.553 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.553 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.553 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.553 01:36:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.553 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.553 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.553 "name": "raid_bdev1", 00:16:26.553 "uuid": "7139637d-f40c-41f0-ab75-12b4f1a22ee5", 00:16:26.553 "strip_size_kb": 0, 00:16:26.553 "state": "online", 00:16:26.553 "raid_level": "raid1", 00:16:26.553 "superblock": true, 00:16:26.553 "num_base_bdevs": 2, 00:16:26.553 "num_base_bdevs_discovered": 1, 00:16:26.553 "num_base_bdevs_operational": 1, 00:16:26.553 "base_bdevs_list": [ 00:16:26.553 { 00:16:26.553 "name": null, 00:16:26.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.553 "is_configured": false, 00:16:26.553 "data_offset": 0, 00:16:26.553 "data_size": 7936 00:16:26.553 }, 00:16:26.553 { 00:16:26.553 "name": "BaseBdev2", 00:16:26.553 "uuid": "280fcfe4-6686-5984-af07-83f536de1c6c", 00:16:26.553 "is_configured": true, 00:16:26.553 "data_offset": 256, 00:16:26.553 "data_size": 7936 00:16:26.553 } 00:16:26.553 ] 00:16:26.553 }' 00:16:26.553 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.553 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.813 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:26.813 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.813 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:26.813 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:26.813 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:16:26.813 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.813 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.813 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.813 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.813 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.074 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.074 "name": "raid_bdev1", 00:16:27.074 "uuid": "7139637d-f40c-41f0-ab75-12b4f1a22ee5", 00:16:27.074 "strip_size_kb": 0, 00:16:27.074 "state": "online", 00:16:27.074 "raid_level": "raid1", 00:16:27.074 "superblock": true, 00:16:27.074 "num_base_bdevs": 2, 00:16:27.074 "num_base_bdevs_discovered": 1, 00:16:27.074 "num_base_bdevs_operational": 1, 00:16:27.074 "base_bdevs_list": [ 00:16:27.074 { 00:16:27.074 "name": null, 00:16:27.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.074 "is_configured": false, 00:16:27.074 "data_offset": 0, 00:16:27.074 "data_size": 7936 00:16:27.074 }, 00:16:27.074 { 00:16:27.074 "name": "BaseBdev2", 00:16:27.074 "uuid": "280fcfe4-6686-5984-af07-83f536de1c6c", 00:16:27.074 "is_configured": true, 00:16:27.074 "data_offset": 256, 00:16:27.074 "data_size": 7936 00:16:27.074 } 00:16:27.074 ] 00:16:27.074 }' 00:16:27.074 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.074 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:27.074 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.074 01:36:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:27.074 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:27.074 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.074 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.074 [2024-10-09 01:36:25.785944] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:27.074 [2024-10-09 01:36:25.788080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d400 00:16:27.074 [2024-10-09 01:36:25.790251] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:27.074 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.074 01:36:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:28.013 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.013 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.013 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.013 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.013 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.013 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.013 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.013 01:36:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.013 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.013 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.013 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.013 "name": "raid_bdev1", 00:16:28.013 "uuid": "7139637d-f40c-41f0-ab75-12b4f1a22ee5", 00:16:28.013 "strip_size_kb": 0, 00:16:28.013 "state": "online", 00:16:28.013 "raid_level": "raid1", 00:16:28.013 "superblock": true, 00:16:28.013 "num_base_bdevs": 2, 00:16:28.013 "num_base_bdevs_discovered": 2, 00:16:28.013 "num_base_bdevs_operational": 2, 00:16:28.013 "process": { 00:16:28.013 "type": "rebuild", 00:16:28.013 "target": "spare", 00:16:28.013 "progress": { 00:16:28.013 "blocks": 2560, 00:16:28.013 "percent": 32 00:16:28.013 } 00:16:28.013 }, 00:16:28.013 "base_bdevs_list": [ 00:16:28.013 { 00:16:28.013 "name": "spare", 00:16:28.013 "uuid": "801ab8b8-4737-5173-9d9d-183caf81000f", 00:16:28.013 "is_configured": true, 00:16:28.013 "data_offset": 256, 00:16:28.013 "data_size": 7936 00:16:28.013 }, 00:16:28.013 { 00:16:28.013 "name": "BaseBdev2", 00:16:28.014 "uuid": "280fcfe4-6686-5984-af07-83f536de1c6c", 00:16:28.014 "is_configured": true, 00:16:28.014 "data_offset": 256, 00:16:28.014 "data_size": 7936 00:16:28.014 } 00:16:28.014 ] 00:16:28.014 }' 00:16:28.014 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.014 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.014 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.274 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:16:28.274 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:28.274 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:28.274 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:28.274 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:28.274 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:28.274 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:28.274 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=602 00:16:28.274 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:28.274 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.274 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.274 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.274 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.274 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.274 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.274 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.274 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.274 01:36:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.274 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.274 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.274 "name": "raid_bdev1", 00:16:28.274 "uuid": "7139637d-f40c-41f0-ab75-12b4f1a22ee5", 00:16:28.274 "strip_size_kb": 0, 00:16:28.274 "state": "online", 00:16:28.274 "raid_level": "raid1", 00:16:28.274 "superblock": true, 00:16:28.274 "num_base_bdevs": 2, 00:16:28.274 "num_base_bdevs_discovered": 2, 00:16:28.274 "num_base_bdevs_operational": 2, 00:16:28.274 "process": { 00:16:28.274 "type": "rebuild", 00:16:28.274 "target": "spare", 00:16:28.274 "progress": { 00:16:28.274 "blocks": 2816, 00:16:28.274 "percent": 35 00:16:28.274 } 00:16:28.274 }, 00:16:28.274 "base_bdevs_list": [ 00:16:28.274 { 00:16:28.274 "name": "spare", 00:16:28.274 "uuid": "801ab8b8-4737-5173-9d9d-183caf81000f", 00:16:28.274 "is_configured": true, 00:16:28.274 "data_offset": 256, 00:16:28.274 "data_size": 7936 00:16:28.274 }, 00:16:28.274 { 00:16:28.274 "name": "BaseBdev2", 00:16:28.274 "uuid": "280fcfe4-6686-5984-af07-83f536de1c6c", 00:16:28.274 "is_configured": true, 00:16:28.274 "data_offset": 256, 00:16:28.274 "data_size": 7936 00:16:28.274 } 00:16:28.274 ] 00:16:28.274 }' 00:16:28.274 01:36:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.274 01:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.274 01:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.274 01:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.274 01:36:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:16:29.214 01:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:29.214 01:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.214 01:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.214 01:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.214 01:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.214 01:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.214 01:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.214 01:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.214 01:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.214 01:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.475 01:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.475 01:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.475 "name": "raid_bdev1", 00:16:29.475 "uuid": "7139637d-f40c-41f0-ab75-12b4f1a22ee5", 00:16:29.475 "strip_size_kb": 0, 00:16:29.475 "state": "online", 00:16:29.475 "raid_level": "raid1", 00:16:29.475 "superblock": true, 00:16:29.475 "num_base_bdevs": 2, 00:16:29.475 "num_base_bdevs_discovered": 2, 00:16:29.475 "num_base_bdevs_operational": 2, 00:16:29.475 "process": { 00:16:29.475 "type": "rebuild", 00:16:29.475 "target": "spare", 00:16:29.475 "progress": { 00:16:29.475 "blocks": 5632, 00:16:29.475 "percent": 70 
00:16:29.475 } 00:16:29.475 }, 00:16:29.475 "base_bdevs_list": [ 00:16:29.475 { 00:16:29.475 "name": "spare", 00:16:29.475 "uuid": "801ab8b8-4737-5173-9d9d-183caf81000f", 00:16:29.475 "is_configured": true, 00:16:29.475 "data_offset": 256, 00:16:29.475 "data_size": 7936 00:16:29.475 }, 00:16:29.475 { 00:16:29.475 "name": "BaseBdev2", 00:16:29.475 "uuid": "280fcfe4-6686-5984-af07-83f536de1c6c", 00:16:29.475 "is_configured": true, 00:16:29.475 "data_offset": 256, 00:16:29.475 "data_size": 7936 00:16:29.475 } 00:16:29.475 ] 00:16:29.475 }' 00:16:29.475 01:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.475 01:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.475 01:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.475 01:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.475 01:36:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:30.045 [2024-10-09 01:36:28.914440] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:30.045 [2024-10-09 01:36:28.914598] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:30.045 [2024-10-09 01:36:28.914747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.616 "name": "raid_bdev1", 00:16:30.616 "uuid": "7139637d-f40c-41f0-ab75-12b4f1a22ee5", 00:16:30.616 "strip_size_kb": 0, 00:16:30.616 "state": "online", 00:16:30.616 "raid_level": "raid1", 00:16:30.616 "superblock": true, 00:16:30.616 "num_base_bdevs": 2, 00:16:30.616 "num_base_bdevs_discovered": 2, 00:16:30.616 "num_base_bdevs_operational": 2, 00:16:30.616 "base_bdevs_list": [ 00:16:30.616 { 00:16:30.616 "name": "spare", 00:16:30.616 "uuid": "801ab8b8-4737-5173-9d9d-183caf81000f", 00:16:30.616 "is_configured": true, 00:16:30.616 "data_offset": 256, 00:16:30.616 "data_size": 7936 00:16:30.616 }, 00:16:30.616 { 00:16:30.616 "name": "BaseBdev2", 00:16:30.616 "uuid": "280fcfe4-6686-5984-af07-83f536de1c6c", 00:16:30.616 "is_configured": true, 00:16:30.616 "data_offset": 256, 00:16:30.616 "data_size": 7936 00:16:30.616 } 00:16:30.616 ] 00:16:30.616 }' 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.616 "name": "raid_bdev1", 00:16:30.616 "uuid": "7139637d-f40c-41f0-ab75-12b4f1a22ee5", 00:16:30.616 "strip_size_kb": 0, 00:16:30.616 "state": "online", 00:16:30.616 "raid_level": 
"raid1", 00:16:30.616 "superblock": true, 00:16:30.616 "num_base_bdevs": 2, 00:16:30.616 "num_base_bdevs_discovered": 2, 00:16:30.616 "num_base_bdevs_operational": 2, 00:16:30.616 "base_bdevs_list": [ 00:16:30.616 { 00:16:30.616 "name": "spare", 00:16:30.616 "uuid": "801ab8b8-4737-5173-9d9d-183caf81000f", 00:16:30.616 "is_configured": true, 00:16:30.616 "data_offset": 256, 00:16:30.616 "data_size": 7936 00:16:30.616 }, 00:16:30.616 { 00:16:30.616 "name": "BaseBdev2", 00:16:30.616 "uuid": "280fcfe4-6686-5984-af07-83f536de1c6c", 00:16:30.616 "is_configured": true, 00:16:30.616 "data_offset": 256, 00:16:30.616 "data_size": 7936 00:16:30.616 } 00:16:30.616 ] 00:16:30.616 }' 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.616 "name": "raid_bdev1", 00:16:30.616 "uuid": "7139637d-f40c-41f0-ab75-12b4f1a22ee5", 00:16:30.616 "strip_size_kb": 0, 00:16:30.616 "state": "online", 00:16:30.616 "raid_level": "raid1", 00:16:30.616 "superblock": true, 00:16:30.616 "num_base_bdevs": 2, 00:16:30.616 "num_base_bdevs_discovered": 2, 00:16:30.616 "num_base_bdevs_operational": 2, 00:16:30.616 "base_bdevs_list": [ 00:16:30.616 { 00:16:30.616 "name": "spare", 00:16:30.616 "uuid": "801ab8b8-4737-5173-9d9d-183caf81000f", 00:16:30.616 "is_configured": true, 00:16:30.616 "data_offset": 256, 00:16:30.616 "data_size": 7936 00:16:30.616 }, 00:16:30.616 { 00:16:30.616 "name": "BaseBdev2", 00:16:30.616 "uuid": "280fcfe4-6686-5984-af07-83f536de1c6c", 00:16:30.616 "is_configured": true, 00:16:30.616 "data_offset": 256, 00:16:30.616 "data_size": 7936 00:16:30.616 } 00:16:30.616 ] 00:16:30.616 }' 00:16:30.616 
01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.616 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.187 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:31.187 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.187 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.187 [2024-10-09 01:36:29.839148] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:31.187 [2024-10-09 01:36:29.839238] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:31.187 [2024-10-09 01:36:29.839336] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:31.187 [2024-10-09 01:36:29.839424] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:31.187 [2024-10-09 01:36:29.839468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:31.187 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.187 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:16:31.187 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.187 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.187 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.187 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.187 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:31.187 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:31.187 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:31.187 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:31.187 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:31.187 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:31.187 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:31.187 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:31.187 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:31.187 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:31.187 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:31.187 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:31.187 01:36:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:31.447 /dev/nbd0 00:16:31.447 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:31.447 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:31.447 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:31.447 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate 
-- common/autotest_common.sh@869 -- # local i 00:16:31.447 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:31.447 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:31.447 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:31.447 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:16:31.447 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:31.447 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:31.447 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:31.447 1+0 records in 00:16:31.447 1+0 records out 00:16:31.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388513 s, 10.5 MB/s 00:16:31.447 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.447 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:16:31.447 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.447 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:31.447 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:16:31.447 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:31.447 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:31.447 01:36:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:31.708 /dev/nbd1 00:16:31.708 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:31.708 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:31.708 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:31.708 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:16:31.708 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:31.708 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:31.708 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:31.708 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:16:31.708 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:31.708 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:31.708 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:31.708 1+0 records in 00:16:31.708 1+0 records out 00:16:31.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000581808 s, 7.0 MB/s 00:16:31.708 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.708 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:16:31.708 01:36:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.708 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:31.708 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:16:31.708 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:31.708 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:31.708 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:31.708 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:31.708 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:31.708 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:31.708 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:31.708 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:31.708 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:31.708 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:31.969 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:31.969 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:31.969 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:31.969 
01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:31.969 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:31.969 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:31.969 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:31.969 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:31.969 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:31.969 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:32.229 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:32.229 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:32.229 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:32.229 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:32.229 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:32.229 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:32.229 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:32.229 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:32.229 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:32.229 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:32.229 
01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.229 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.229 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.229 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:32.229 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.229 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.229 [2024-10-09 01:36:30.926145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:32.229 [2024-10-09 01:36:30.926277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.229 [2024-10-09 01:36:30.926322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:32.229 [2024-10-09 01:36:30.926350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.229 [2024-10-09 01:36:30.928565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.229 [2024-10-09 01:36:30.928632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:32.229 [2024-10-09 01:36:30.928719] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:32.229 [2024-10-09 01:36:30.928806] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:32.229 [2024-10-09 01:36:30.928945] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:32.229 spare 00:16:32.229 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.229 01:36:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:32.229 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.229 01:36:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.229 [2024-10-09 01:36:31.029016] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:32.229 [2024-10-09 01:36:31.029048] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:32.229 [2024-10-09 01:36:31.029164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:16:32.229 [2024-10-09 01:36:31.029287] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:32.229 [2024-10-09 01:36:31.029296] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:32.229 [2024-10-09 01:36:31.029410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.229 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.229 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:32.229 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.229 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.229 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.229 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.229 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:32.229 
01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.229 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.229 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.229 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.229 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.230 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.230 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.230 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.230 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.230 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.230 "name": "raid_bdev1", 00:16:32.230 "uuid": "7139637d-f40c-41f0-ab75-12b4f1a22ee5", 00:16:32.230 "strip_size_kb": 0, 00:16:32.230 "state": "online", 00:16:32.230 "raid_level": "raid1", 00:16:32.230 "superblock": true, 00:16:32.230 "num_base_bdevs": 2, 00:16:32.230 "num_base_bdevs_discovered": 2, 00:16:32.230 "num_base_bdevs_operational": 2, 00:16:32.230 "base_bdevs_list": [ 00:16:32.230 { 00:16:32.230 "name": "spare", 00:16:32.230 "uuid": "801ab8b8-4737-5173-9d9d-183caf81000f", 00:16:32.230 "is_configured": true, 00:16:32.230 "data_offset": 256, 00:16:32.230 "data_size": 7936 00:16:32.230 }, 00:16:32.230 { 00:16:32.230 "name": "BaseBdev2", 00:16:32.230 "uuid": "280fcfe4-6686-5984-af07-83f536de1c6c", 00:16:32.230 "is_configured": true, 00:16:32.230 "data_offset": 256, 00:16:32.230 "data_size": 7936 
00:16:32.230 } 00:16:32.230 ] 00:16:32.230 }' 00:16:32.230 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.230 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.800 "name": "raid_bdev1", 00:16:32.800 "uuid": "7139637d-f40c-41f0-ab75-12b4f1a22ee5", 00:16:32.800 "strip_size_kb": 0, 00:16:32.800 "state": "online", 00:16:32.800 "raid_level": "raid1", 00:16:32.800 "superblock": true, 00:16:32.800 "num_base_bdevs": 2, 00:16:32.800 "num_base_bdevs_discovered": 2, 00:16:32.800 "num_base_bdevs_operational": 2, 00:16:32.800 "base_bdevs_list": [ 
00:16:32.800 { 00:16:32.800 "name": "spare", 00:16:32.800 "uuid": "801ab8b8-4737-5173-9d9d-183caf81000f", 00:16:32.800 "is_configured": true, 00:16:32.800 "data_offset": 256, 00:16:32.800 "data_size": 7936 00:16:32.800 }, 00:16:32.800 { 00:16:32.800 "name": "BaseBdev2", 00:16:32.800 "uuid": "280fcfe4-6686-5984-af07-83f536de1c6c", 00:16:32.800 "is_configured": true, 00:16:32.800 "data_offset": 256, 00:16:32.800 "data_size": 7936 00:16:32.800 } 00:16:32.800 ] 00:16:32.800 }' 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.800 01:36:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.800 [2024-10-09 01:36:31.662351] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.800 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:33.061 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.061 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.061 "name": "raid_bdev1", 00:16:33.061 "uuid": "7139637d-f40c-41f0-ab75-12b4f1a22ee5", 00:16:33.061 "strip_size_kb": 0, 00:16:33.061 "state": "online", 00:16:33.061 "raid_level": "raid1", 00:16:33.061 "superblock": true, 00:16:33.061 "num_base_bdevs": 2, 00:16:33.061 "num_base_bdevs_discovered": 1, 00:16:33.061 "num_base_bdevs_operational": 1, 00:16:33.061 "base_bdevs_list": [ 00:16:33.061 { 00:16:33.061 "name": null, 00:16:33.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.061 "is_configured": false, 00:16:33.061 "data_offset": 0, 00:16:33.061 "data_size": 7936 00:16:33.061 }, 00:16:33.061 { 00:16:33.061 "name": "BaseBdev2", 00:16:33.061 "uuid": "280fcfe4-6686-5984-af07-83f536de1c6c", 00:16:33.061 "is_configured": true, 00:16:33.061 "data_offset": 256, 00:16:33.061 "data_size": 7936 00:16:33.061 } 00:16:33.061 ] 00:16:33.061 }' 00:16:33.061 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.061 01:36:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.321 01:36:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:33.321 01:36:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.321 01:36:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.321 [2024-10-09 01:36:32.090474] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:33.321 [2024-10-09 01:36:32.090662] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than 
existing raid bdev raid_bdev1 (5) 00:16:33.321 [2024-10-09 01:36:32.090728] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:33.321 [2024-10-09 01:36:32.090788] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:33.321 [2024-10-09 01:36:32.093476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:16:33.321 [2024-10-09 01:36:32.095546] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:33.321 01:36:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.321 01:36:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:34.261 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:34.261 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.261 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:34.261 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:34.261 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.261 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.261 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.261 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.261 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.261 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:16:34.526 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.526 "name": "raid_bdev1", 00:16:34.526 "uuid": "7139637d-f40c-41f0-ab75-12b4f1a22ee5", 00:16:34.526 "strip_size_kb": 0, 00:16:34.526 "state": "online", 00:16:34.526 "raid_level": "raid1", 00:16:34.526 "superblock": true, 00:16:34.526 "num_base_bdevs": 2, 00:16:34.526 "num_base_bdevs_discovered": 2, 00:16:34.526 "num_base_bdevs_operational": 2, 00:16:34.526 "process": { 00:16:34.526 "type": "rebuild", 00:16:34.526 "target": "spare", 00:16:34.526 "progress": { 00:16:34.526 "blocks": 2560, 00:16:34.526 "percent": 32 00:16:34.526 } 00:16:34.526 }, 00:16:34.526 "base_bdevs_list": [ 00:16:34.526 { 00:16:34.526 "name": "spare", 00:16:34.526 "uuid": "801ab8b8-4737-5173-9d9d-183caf81000f", 00:16:34.526 "is_configured": true, 00:16:34.526 "data_offset": 256, 00:16:34.526 "data_size": 7936 00:16:34.526 }, 00:16:34.526 { 00:16:34.526 "name": "BaseBdev2", 00:16:34.526 "uuid": "280fcfe4-6686-5984-af07-83f536de1c6c", 00:16:34.526 "is_configured": true, 00:16:34.526 "data_offset": 256, 00:16:34.526 "data_size": 7936 00:16:34.526 } 00:16:34.526 ] 00:16:34.526 }' 00:16:34.526 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.526 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:34.526 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.527 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:34.527 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:34.527 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.527 01:36:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.527 [2024-10-09 01:36:33.262374] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:34.527 [2024-10-09 01:36:33.305229] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:34.527 [2024-10-09 01:36:33.305354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.527 [2024-10-09 01:36:33.305371] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:34.527 [2024-10-09 01:36:33.305382] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:34.527 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.527 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:34.527 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.527 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.527 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.527 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.527 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:34.527 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.527 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.527 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.527 01:36:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.527 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.527 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.527 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.527 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.527 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.527 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.527 "name": "raid_bdev1", 00:16:34.527 "uuid": "7139637d-f40c-41f0-ab75-12b4f1a22ee5", 00:16:34.527 "strip_size_kb": 0, 00:16:34.527 "state": "online", 00:16:34.527 "raid_level": "raid1", 00:16:34.527 "superblock": true, 00:16:34.527 "num_base_bdevs": 2, 00:16:34.527 "num_base_bdevs_discovered": 1, 00:16:34.527 "num_base_bdevs_operational": 1, 00:16:34.527 "base_bdevs_list": [ 00:16:34.527 { 00:16:34.527 "name": null, 00:16:34.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.527 "is_configured": false, 00:16:34.527 "data_offset": 0, 00:16:34.527 "data_size": 7936 00:16:34.527 }, 00:16:34.527 { 00:16:34.527 "name": "BaseBdev2", 00:16:34.527 "uuid": "280fcfe4-6686-5984-af07-83f536de1c6c", 00:16:34.527 "is_configured": true, 00:16:34.527 "data_offset": 256, 00:16:34.527 "data_size": 7936 00:16:34.527 } 00:16:34.527 ] 00:16:34.527 }' 00:16:34.527 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.527 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.143 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd 
bdev_passthru_create -b spare_delay -p spare 00:16:35.143 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.143 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.143 [2024-10-09 01:36:33.737962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:35.143 [2024-10-09 01:36:33.738077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.143 [2024-10-09 01:36:33.738131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:35.143 [2024-10-09 01:36:33.738163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.143 [2024-10-09 01:36:33.738415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.143 [2024-10-09 01:36:33.738468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:35.143 [2024-10-09 01:36:33.738553] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:35.143 [2024-10-09 01:36:33.738598] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:35.143 [2024-10-09 01:36:33.738636] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:35.143 [2024-10-09 01:36:33.738730] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:35.143 [2024-10-09 01:36:33.741128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1dc0 00:16:35.143 [2024-10-09 01:36:33.743212] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:35.143 spare 00:16:35.143 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.143 01:36:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:36.084 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.084 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.085 "name": 
"raid_bdev1", 00:16:36.085 "uuid": "7139637d-f40c-41f0-ab75-12b4f1a22ee5", 00:16:36.085 "strip_size_kb": 0, 00:16:36.085 "state": "online", 00:16:36.085 "raid_level": "raid1", 00:16:36.085 "superblock": true, 00:16:36.085 "num_base_bdevs": 2, 00:16:36.085 "num_base_bdevs_discovered": 2, 00:16:36.085 "num_base_bdevs_operational": 2, 00:16:36.085 "process": { 00:16:36.085 "type": "rebuild", 00:16:36.085 "target": "spare", 00:16:36.085 "progress": { 00:16:36.085 "blocks": 2560, 00:16:36.085 "percent": 32 00:16:36.085 } 00:16:36.085 }, 00:16:36.085 "base_bdevs_list": [ 00:16:36.085 { 00:16:36.085 "name": "spare", 00:16:36.085 "uuid": "801ab8b8-4737-5173-9d9d-183caf81000f", 00:16:36.085 "is_configured": true, 00:16:36.085 "data_offset": 256, 00:16:36.085 "data_size": 7936 00:16:36.085 }, 00:16:36.085 { 00:16:36.085 "name": "BaseBdev2", 00:16:36.085 "uuid": "280fcfe4-6686-5984-af07-83f536de1c6c", 00:16:36.085 "is_configured": true, 00:16:36.085 "data_offset": 256, 00:16:36.085 "data_size": 7936 00:16:36.085 } 00:16:36.085 ] 00:16:36.085 }' 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.085 [2024-10-09 01:36:34.908074] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:16:36.085 [2024-10-09 01:36:34.952739] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:36.085 [2024-10-09 01:36:34.952801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.085 [2024-10-09 01:36:34.952819] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:36.085 [2024-10-09 01:36:34.952827] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.085 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.345 01:36:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.345 01:36:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.345 "name": "raid_bdev1", 00:16:36.345 "uuid": "7139637d-f40c-41f0-ab75-12b4f1a22ee5", 00:16:36.345 "strip_size_kb": 0, 00:16:36.345 "state": "online", 00:16:36.345 "raid_level": "raid1", 00:16:36.345 "superblock": true, 00:16:36.345 "num_base_bdevs": 2, 00:16:36.345 "num_base_bdevs_discovered": 1, 00:16:36.345 "num_base_bdevs_operational": 1, 00:16:36.345 "base_bdevs_list": [ 00:16:36.345 { 00:16:36.345 "name": null, 00:16:36.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.345 "is_configured": false, 00:16:36.345 "data_offset": 0, 00:16:36.345 "data_size": 7936 00:16:36.345 }, 00:16:36.345 { 00:16:36.345 "name": "BaseBdev2", 00:16:36.345 "uuid": "280fcfe4-6686-5984-af07-83f536de1c6c", 00:16:36.345 "is_configured": true, 00:16:36.345 "data_offset": 256, 00:16:36.345 "data_size": 7936 00:16:36.345 } 00:16:36.345 ] 00:16:36.345 }' 00:16:36.345 01:36:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.345 01:36:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.605 01:36:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:36.605 01:36:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.605 01:36:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:36.605 01:36:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:36.605 01:36:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.605 01:36:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.605 01:36:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.605 01:36:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.605 01:36:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.605 01:36:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.605 01:36:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.605 "name": "raid_bdev1", 00:16:36.605 "uuid": "7139637d-f40c-41f0-ab75-12b4f1a22ee5", 00:16:36.605 "strip_size_kb": 0, 00:16:36.605 "state": "online", 00:16:36.605 "raid_level": "raid1", 00:16:36.605 "superblock": true, 00:16:36.605 "num_base_bdevs": 2, 00:16:36.605 "num_base_bdevs_discovered": 1, 00:16:36.605 "num_base_bdevs_operational": 1, 00:16:36.605 "base_bdevs_list": [ 00:16:36.605 { 00:16:36.605 "name": null, 00:16:36.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.605 "is_configured": false, 00:16:36.605 "data_offset": 0, 00:16:36.605 "data_size": 7936 00:16:36.605 }, 00:16:36.605 { 00:16:36.605 "name": "BaseBdev2", 00:16:36.605 "uuid": "280fcfe4-6686-5984-af07-83f536de1c6c", 00:16:36.605 "is_configured": true, 00:16:36.605 "data_offset": 256, 00:16:36.605 "data_size": 7936 00:16:36.605 } 00:16:36.605 ] 00:16:36.605 }' 00:16:36.605 01:36:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.605 01:36:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:36.605 01:36:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.865 01:36:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:36.865 01:36:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:36.865 01:36:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.866 01:36:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.866 01:36:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.866 01:36:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:36.866 01:36:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.866 01:36:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.866 [2024-10-09 01:36:35.536986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:36.866 [2024-10-09 01:36:35.537037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.866 [2024-10-09 01:36:35.537061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:36.866 [2024-10-09 01:36:35.537070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.866 [2024-10-09 01:36:35.537268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.866 [2024-10-09 01:36:35.537279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:16:36.866 [2024-10-09 01:36:35.537332] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:36.866 [2024-10-09 01:36:35.537354] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:36.866 [2024-10-09 01:36:35.537365] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:36.866 [2024-10-09 01:36:35.537384] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:36.866 BaseBdev1 00:16:36.866 01:36:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.866 01:36:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:37.812 01:36:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:37.812 01:36:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.812 01:36:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.812 01:36:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.812 01:36:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.812 01:36:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:37.812 01:36:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.812 01:36:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.812 01:36:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:37.812 01:36:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.812 01:36:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.812 01:36:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.812 01:36:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.812 01:36:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.812 01:36:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.812 01:36:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.812 "name": "raid_bdev1", 00:16:37.812 "uuid": "7139637d-f40c-41f0-ab75-12b4f1a22ee5", 00:16:37.812 "strip_size_kb": 0, 00:16:37.813 "state": "online", 00:16:37.813 "raid_level": "raid1", 00:16:37.813 "superblock": true, 00:16:37.813 "num_base_bdevs": 2, 00:16:37.813 "num_base_bdevs_discovered": 1, 00:16:37.813 "num_base_bdevs_operational": 1, 00:16:37.813 "base_bdevs_list": [ 00:16:37.813 { 00:16:37.813 "name": null, 00:16:37.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.813 "is_configured": false, 00:16:37.813 "data_offset": 0, 00:16:37.813 "data_size": 7936 00:16:37.813 }, 00:16:37.813 { 00:16:37.813 "name": "BaseBdev2", 00:16:37.813 "uuid": "280fcfe4-6686-5984-af07-83f536de1c6c", 00:16:37.813 "is_configured": true, 00:16:37.813 "data_offset": 256, 00:16:37.813 "data_size": 7936 00:16:37.813 } 00:16:37.813 ] 00:16:37.813 }' 00:16:37.813 01:36:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.813 01:36:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.383 01:36:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:16:38.383 01:36:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.383 01:36:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:38.383 01:36:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:38.383 01:36:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.383 01:36:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.383 01:36:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.383 01:36:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.383 01:36:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.383 01:36:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.383 01:36:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.383 "name": "raid_bdev1", 00:16:38.383 "uuid": "7139637d-f40c-41f0-ab75-12b4f1a22ee5", 00:16:38.383 "strip_size_kb": 0, 00:16:38.383 "state": "online", 00:16:38.383 "raid_level": "raid1", 00:16:38.383 "superblock": true, 00:16:38.383 "num_base_bdevs": 2, 00:16:38.383 "num_base_bdevs_discovered": 1, 00:16:38.383 "num_base_bdevs_operational": 1, 00:16:38.383 "base_bdevs_list": [ 00:16:38.383 { 00:16:38.383 "name": null, 00:16:38.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.383 "is_configured": false, 00:16:38.383 "data_offset": 0, 00:16:38.383 "data_size": 7936 00:16:38.383 }, 00:16:38.383 { 00:16:38.383 "name": "BaseBdev2", 00:16:38.383 "uuid": "280fcfe4-6686-5984-af07-83f536de1c6c", 00:16:38.383 "is_configured": 
true, 00:16:38.383 "data_offset": 256, 00:16:38.383 "data_size": 7936 00:16:38.383 } 00:16:38.383 ] 00:16:38.383 }' 00:16:38.383 01:36:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.383 01:36:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:38.383 01:36:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.383 01:36:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:38.383 01:36:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:38.383 01:36:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:16:38.383 01:36:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:38.383 01:36:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:38.383 01:36:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:38.383 01:36:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:38.383 01:36:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:38.383 01:36:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:38.383 01:36:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.383 01:36:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.383 [2024-10-09 01:36:37.133423] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:38.383 [2024-10-09 01:36:37.133549] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:38.383 [2024-10-09 01:36:37.133564] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:38.383 request: 00:16:38.383 { 00:16:38.383 "base_bdev": "BaseBdev1", 00:16:38.383 "raid_bdev": "raid_bdev1", 00:16:38.383 "method": "bdev_raid_add_base_bdev", 00:16:38.383 "req_id": 1 00:16:38.383 } 00:16:38.383 Got JSON-RPC error response 00:16:38.383 response: 00:16:38.383 { 00:16:38.383 "code": -22, 00:16:38.383 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:38.383 } 00:16:38.383 01:36:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:38.383 01:36:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:16:38.383 01:36:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:38.383 01:36:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:38.383 01:36:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:38.383 01:36:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:39.336 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:39.336 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.336 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.336 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:16:39.336 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.336 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:39.336 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.336 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.336 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.336 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.336 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.336 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.336 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.336 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.336 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.336 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.336 "name": "raid_bdev1", 00:16:39.336 "uuid": "7139637d-f40c-41f0-ab75-12b4f1a22ee5", 00:16:39.336 "strip_size_kb": 0, 00:16:39.336 "state": "online", 00:16:39.336 "raid_level": "raid1", 00:16:39.336 "superblock": true, 00:16:39.336 "num_base_bdevs": 2, 00:16:39.336 "num_base_bdevs_discovered": 1, 00:16:39.336 "num_base_bdevs_operational": 1, 00:16:39.336 "base_bdevs_list": [ 00:16:39.336 { 00:16:39.336 "name": null, 00:16:39.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.337 "is_configured": false, 00:16:39.337 
"data_offset": 0, 00:16:39.337 "data_size": 7936 00:16:39.337 }, 00:16:39.337 { 00:16:39.337 "name": "BaseBdev2", 00:16:39.337 "uuid": "280fcfe4-6686-5984-af07-83f536de1c6c", 00:16:39.337 "is_configured": true, 00:16:39.337 "data_offset": 256, 00:16:39.337 "data_size": 7936 00:16:39.337 } 00:16:39.337 ] 00:16:39.337 }' 00:16:39.337 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.337 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.907 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:39.907 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.907 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:39.907 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:39.907 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.907 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.907 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.907 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.907 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.907 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.907 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.907 "name": "raid_bdev1", 00:16:39.907 "uuid": "7139637d-f40c-41f0-ab75-12b4f1a22ee5", 00:16:39.907 
"strip_size_kb": 0, 00:16:39.907 "state": "online", 00:16:39.907 "raid_level": "raid1", 00:16:39.907 "superblock": true, 00:16:39.907 "num_base_bdevs": 2, 00:16:39.907 "num_base_bdevs_discovered": 1, 00:16:39.907 "num_base_bdevs_operational": 1, 00:16:39.907 "base_bdevs_list": [ 00:16:39.907 { 00:16:39.907 "name": null, 00:16:39.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.907 "is_configured": false, 00:16:39.907 "data_offset": 0, 00:16:39.907 "data_size": 7936 00:16:39.907 }, 00:16:39.907 { 00:16:39.907 "name": "BaseBdev2", 00:16:39.907 "uuid": "280fcfe4-6686-5984-af07-83f536de1c6c", 00:16:39.907 "is_configured": true, 00:16:39.907 "data_offset": 256, 00:16:39.907 "data_size": 7936 00:16:39.907 } 00:16:39.907 ] 00:16:39.907 }' 00:16:39.907 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.907 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:39.907 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.907 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:39.907 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 99269 00:16:39.907 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 99269 ']' 00:16:39.907 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 99269 00:16:39.907 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:16:39.907 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:39.907 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99269 00:16:39.907 01:36:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:39.907 killing process with pid 99269 00:16:39.907 Received shutdown signal, test time was about 60.000000 seconds 00:16:39.907 00:16:39.907 Latency(us) 00:16:39.907 [2024-10-09T01:36:38.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.907 [2024-10-09T01:36:38.800Z] =================================================================================================================== 00:16:39.908 [2024-10-09T01:36:38.801Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:39.908 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:39.908 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99269' 00:16:39.908 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 99269 00:16:39.908 [2024-10-09 01:36:38.744962] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:39.908 [2024-10-09 01:36:38.745083] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.908 [2024-10-09 01:36:38.745122] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:39.908 [2024-10-09 01:36:38.745134] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:39.908 01:36:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 99269 00:16:40.168 [2024-10-09 01:36:38.805787] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:40.429 01:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:16:40.429 00:16:40.429 real 0m18.167s 00:16:40.429 user 0m23.844s 00:16:40.429 sys 0m2.654s 00:16:40.429 01:36:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:40.429 ************************************ 00:16:40.429 END TEST raid_rebuild_test_sb_md_separate 00:16:40.429 ************************************ 00:16:40.429 01:36:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.429 01:36:39 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:16:40.429 01:36:39 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:16:40.429 01:36:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:40.429 01:36:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:40.429 01:36:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:40.429 ************************************ 00:16:40.429 START TEST raid_state_function_test_sb_md_interleaved 00:16:40.429 ************************************ 00:16:40.429 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:16:40.429 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:40.429 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:40.429 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:40.429 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:40.429 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:40.429 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.429 01:36:39 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:40.429 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:40.429 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.429 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:40.429 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:40.429 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:40.429 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:40.429 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:40.429 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:40.429 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:40.429 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:40.429 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:40.429 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:40.429 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:40.430 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:40.430 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:16:40.430 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=99944 00:16:40.430 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:40.430 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 99944' 00:16:40.430 Process raid pid: 99944 00:16:40.430 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 99944 00:16:40.430 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99944 ']' 00:16:40.430 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.430 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:40.430 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.430 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:40.430 01:36:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:40.690 [2024-10-09 01:36:39.343459] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:16:40.690 [2024-10-09 01:36:39.343723] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.690 [2024-10-09 01:36:39.477484] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:40.690 [2024-10-09 01:36:39.507133] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.690 [2024-10-09 01:36:39.579958] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.950 [2024-10-09 01:36:39.658302] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:40.950 [2024-10-09 01:36:39.658341] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.520 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:41.521 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:41.521 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:41.521 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.521 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:41.521 [2024-10-09 01:36:40.211994] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:41.521 [2024-10-09 01:36:40.212050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:41.521 [2024-10-09 01:36:40.212063] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:16:41.521 [2024-10-09 01:36:40.212070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:41.521 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.521 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:41.521 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.521 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.521 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.521 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.521 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:41.521 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.521 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.521 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.521 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.521 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.521 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.521 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:41.521 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:41.521 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.521 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.521 "name": "Existed_Raid", 00:16:41.521 "uuid": "d1b9af50-7193-4429-8623-70e5da41e47b", 00:16:41.521 "strip_size_kb": 0, 00:16:41.521 "state": "configuring", 00:16:41.521 "raid_level": "raid1", 00:16:41.521 "superblock": true, 00:16:41.521 "num_base_bdevs": 2, 00:16:41.521 "num_base_bdevs_discovered": 0, 00:16:41.521 "num_base_bdevs_operational": 2, 00:16:41.521 "base_bdevs_list": [ 00:16:41.521 { 00:16:41.521 "name": "BaseBdev1", 00:16:41.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.521 "is_configured": false, 00:16:41.521 "data_offset": 0, 00:16:41.521 "data_size": 0 00:16:41.521 }, 00:16:41.521 { 00:16:41.521 "name": "BaseBdev2", 00:16:41.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.521 "is_configured": false, 00:16:41.521 "data_offset": 0, 00:16:41.521 "data_size": 0 00:16:41.521 } 00:16:41.521 ] 00:16:41.521 }' 00:16:41.521 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.521 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:41.781 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:41.781 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.781 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:41.781 [2024-10-09 01:36:40.652004] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: 
Existed_Raid 00:16:41.781 [2024-10-09 01:36:40.652103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:41.781 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.781 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:41.781 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.781 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:41.781 [2024-10-09 01:36:40.664005] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:41.781 [2024-10-09 01:36:40.664075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:41.781 [2024-10-09 01:36:40.664102] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:41.781 [2024-10-09 01:36:40.664121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:41.781 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.781 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:16:41.781 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.781 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.041 [2024-10-09 01:36:40.691825] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.041 BaseBdev1 00:16:42.041 01:36:40 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.041 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:42.041 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:42.041 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:42.041 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:16:42.041 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:42.041 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:42.041 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:42.041 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.041 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.041 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.041 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:42.041 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.041 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.041 [ 00:16:42.041 { 00:16:42.041 "name": "BaseBdev1", 00:16:42.041 "aliases": [ 00:16:42.041 "c7129cfb-2866-4a7b-98d4-77adc69643c7" 00:16:42.041 ], 00:16:42.041 "product_name": "Malloc 
disk", 00:16:42.041 "block_size": 4128, 00:16:42.041 "num_blocks": 8192, 00:16:42.041 "uuid": "c7129cfb-2866-4a7b-98d4-77adc69643c7", 00:16:42.041 "md_size": 32, 00:16:42.041 "md_interleave": true, 00:16:42.041 "dif_type": 0, 00:16:42.041 "assigned_rate_limits": { 00:16:42.041 "rw_ios_per_sec": 0, 00:16:42.041 "rw_mbytes_per_sec": 0, 00:16:42.041 "r_mbytes_per_sec": 0, 00:16:42.041 "w_mbytes_per_sec": 0 00:16:42.041 }, 00:16:42.041 "claimed": true, 00:16:42.041 "claim_type": "exclusive_write", 00:16:42.041 "zoned": false, 00:16:42.041 "supported_io_types": { 00:16:42.042 "read": true, 00:16:42.042 "write": true, 00:16:42.042 "unmap": true, 00:16:42.042 "flush": true, 00:16:42.042 "reset": true, 00:16:42.042 "nvme_admin": false, 00:16:42.042 "nvme_io": false, 00:16:42.042 "nvme_io_md": false, 00:16:42.042 "write_zeroes": true, 00:16:42.042 "zcopy": true, 00:16:42.042 "get_zone_info": false, 00:16:42.042 "zone_management": false, 00:16:42.042 "zone_append": false, 00:16:42.042 "compare": false, 00:16:42.042 "compare_and_write": false, 00:16:42.042 "abort": true, 00:16:42.042 "seek_hole": false, 00:16:42.042 "seek_data": false, 00:16:42.042 "copy": true, 00:16:42.042 "nvme_iov_md": false 00:16:42.042 }, 00:16:42.042 "memory_domains": [ 00:16:42.042 { 00:16:42.042 "dma_device_id": "system", 00:16:42.042 "dma_device_type": 1 00:16:42.042 }, 00:16:42.042 { 00:16:42.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.042 "dma_device_type": 2 00:16:42.042 } 00:16:42.042 ], 00:16:42.042 "driver_specific": {} 00:16:42.042 } 00:16:42.042 ] 00:16:42.042 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.042 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:16:42.042 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:42.042 01:36:40 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.042 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.042 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.042 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.042 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:42.042 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.042 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.042 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.042 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.042 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.042 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.042 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.042 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.042 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.042 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.042 "name": "Existed_Raid", 00:16:42.042 "uuid": 
"d12d6931-56a1-47bd-8f59-cb12eece88ee", 00:16:42.042 "strip_size_kb": 0, 00:16:42.042 "state": "configuring", 00:16:42.042 "raid_level": "raid1", 00:16:42.042 "superblock": true, 00:16:42.042 "num_base_bdevs": 2, 00:16:42.042 "num_base_bdevs_discovered": 1, 00:16:42.042 "num_base_bdevs_operational": 2, 00:16:42.042 "base_bdevs_list": [ 00:16:42.042 { 00:16:42.042 "name": "BaseBdev1", 00:16:42.042 "uuid": "c7129cfb-2866-4a7b-98d4-77adc69643c7", 00:16:42.042 "is_configured": true, 00:16:42.042 "data_offset": 256, 00:16:42.042 "data_size": 7936 00:16:42.042 }, 00:16:42.042 { 00:16:42.042 "name": "BaseBdev2", 00:16:42.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.042 "is_configured": false, 00:16:42.042 "data_offset": 0, 00:16:42.042 "data_size": 0 00:16:42.042 } 00:16:42.042 ] 00:16:42.042 }' 00:16:42.042 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.042 01:36:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.302 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:42.302 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.302 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.302 [2024-10-09 01:36:41.163973] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:42.302 [2024-10-09 01:36:41.164015] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:42.302 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.302 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b 
''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:42.302 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.302 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.302 [2024-10-09 01:36:41.176040] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.302 [2024-10-09 01:36:41.178192] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:42.302 [2024-10-09 01:36:41.178260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:42.302 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.302 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:42.302 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:42.302 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:42.302 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.302 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.302 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.302 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.302 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:42.302 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:16:42.302 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.302 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.302 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.302 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.302 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.302 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.302 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.562 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.562 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.562 "name": "Existed_Raid", 00:16:42.562 "uuid": "fabc703e-1c79-4137-ac7d-a2faaa78bcbb", 00:16:42.562 "strip_size_kb": 0, 00:16:42.562 "state": "configuring", 00:16:42.562 "raid_level": "raid1", 00:16:42.562 "superblock": true, 00:16:42.562 "num_base_bdevs": 2, 00:16:42.562 "num_base_bdevs_discovered": 1, 00:16:42.562 "num_base_bdevs_operational": 2, 00:16:42.562 "base_bdevs_list": [ 00:16:42.562 { 00:16:42.562 "name": "BaseBdev1", 00:16:42.562 "uuid": "c7129cfb-2866-4a7b-98d4-77adc69643c7", 00:16:42.562 "is_configured": true, 00:16:42.562 "data_offset": 256, 00:16:42.562 "data_size": 7936 00:16:42.562 }, 00:16:42.562 { 00:16:42.562 "name": "BaseBdev2", 00:16:42.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.562 "is_configured": false, 00:16:42.562 "data_offset": 0, 00:16:42.562 
"data_size": 0 00:16:42.562 } 00:16:42.562 ] 00:16:42.562 }' 00:16:42.562 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.562 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.823 [2024-10-09 01:36:41.543414] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:42.823 [2024-10-09 01:36:41.543899] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:42.823 [2024-10-09 01:36:41.543946] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:42.823 [2024-10-09 01:36:41.544180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:42.823 [2024-10-09 01:36:41.544377] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:42.823 [2024-10-09 01:36:41.544400] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:42.823 BaseBdev2 00:16:42.823 [2024-10-09 01:36:41.544521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- 
# local bdev_name=BaseBdev2 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.823 [ 00:16:42.823 { 00:16:42.823 "name": "BaseBdev2", 00:16:42.823 "aliases": [ 00:16:42.823 "98d1a5e4-4701-4c51-92cd-4c8fc751813f" 00:16:42.823 ], 00:16:42.823 "product_name": "Malloc disk", 00:16:42.823 "block_size": 4128, 00:16:42.823 "num_blocks": 8192, 00:16:42.823 "uuid": "98d1a5e4-4701-4c51-92cd-4c8fc751813f", 00:16:42.823 "md_size": 32, 00:16:42.823 "md_interleave": true, 00:16:42.823 "dif_type": 0, 00:16:42.823 "assigned_rate_limits": { 00:16:42.823 "rw_ios_per_sec": 0, 00:16:42.823 "rw_mbytes_per_sec": 0, 
00:16:42.823 "r_mbytes_per_sec": 0, 00:16:42.823 "w_mbytes_per_sec": 0 00:16:42.823 }, 00:16:42.823 "claimed": true, 00:16:42.823 "claim_type": "exclusive_write", 00:16:42.823 "zoned": false, 00:16:42.823 "supported_io_types": { 00:16:42.823 "read": true, 00:16:42.823 "write": true, 00:16:42.823 "unmap": true, 00:16:42.823 "flush": true, 00:16:42.823 "reset": true, 00:16:42.823 "nvme_admin": false, 00:16:42.823 "nvme_io": false, 00:16:42.823 "nvme_io_md": false, 00:16:42.823 "write_zeroes": true, 00:16:42.823 "zcopy": true, 00:16:42.823 "get_zone_info": false, 00:16:42.823 "zone_management": false, 00:16:42.823 "zone_append": false, 00:16:42.823 "compare": false, 00:16:42.823 "compare_and_write": false, 00:16:42.823 "abort": true, 00:16:42.823 "seek_hole": false, 00:16:42.823 "seek_data": false, 00:16:42.823 "copy": true, 00:16:42.823 "nvme_iov_md": false 00:16:42.823 }, 00:16:42.823 "memory_domains": [ 00:16:42.823 { 00:16:42.823 "dma_device_id": "system", 00:16:42.823 "dma_device_type": 1 00:16:42.823 }, 00:16:42.823 { 00:16:42.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.823 "dma_device_type": 2 00:16:42.823 } 00:16:42.823 ], 00:16:42.823 "driver_specific": {} 00:16:42.823 } 00:16:42.823 ] 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.823 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.824 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:42.824 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.824 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.824 "name": "Existed_Raid", 00:16:42.824 "uuid": "fabc703e-1c79-4137-ac7d-a2faaa78bcbb", 00:16:42.824 "strip_size_kb": 0, 00:16:42.824 "state": 
"online", 00:16:42.824 "raid_level": "raid1", 00:16:42.824 "superblock": true, 00:16:42.824 "num_base_bdevs": 2, 00:16:42.824 "num_base_bdevs_discovered": 2, 00:16:42.824 "num_base_bdevs_operational": 2, 00:16:42.824 "base_bdevs_list": [ 00:16:42.824 { 00:16:42.824 "name": "BaseBdev1", 00:16:42.824 "uuid": "c7129cfb-2866-4a7b-98d4-77adc69643c7", 00:16:42.824 "is_configured": true, 00:16:42.824 "data_offset": 256, 00:16:42.824 "data_size": 7936 00:16:42.824 }, 00:16:42.824 { 00:16:42.824 "name": "BaseBdev2", 00:16:42.824 "uuid": "98d1a5e4-4701-4c51-92cd-4c8fc751813f", 00:16:42.824 "is_configured": true, 00:16:42.824 "data_offset": 256, 00:16:42.824 "data_size": 7936 00:16:42.824 } 00:16:42.824 ] 00:16:42.824 }' 00:16:42.824 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.824 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.394 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:43.394 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:43.394 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:43.394 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:43.394 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:43.394 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:43.394 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:43.394 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.394 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:43.394 01:36:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.394 [2024-10-09 01:36:41.995829] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:43.394 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.394 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:43.394 "name": "Existed_Raid", 00:16:43.394 "aliases": [ 00:16:43.394 "fabc703e-1c79-4137-ac7d-a2faaa78bcbb" 00:16:43.394 ], 00:16:43.394 "product_name": "Raid Volume", 00:16:43.394 "block_size": 4128, 00:16:43.394 "num_blocks": 7936, 00:16:43.394 "uuid": "fabc703e-1c79-4137-ac7d-a2faaa78bcbb", 00:16:43.394 "md_size": 32, 00:16:43.394 "md_interleave": true, 00:16:43.394 "dif_type": 0, 00:16:43.394 "assigned_rate_limits": { 00:16:43.395 "rw_ios_per_sec": 0, 00:16:43.395 "rw_mbytes_per_sec": 0, 00:16:43.395 "r_mbytes_per_sec": 0, 00:16:43.395 "w_mbytes_per_sec": 0 00:16:43.395 }, 00:16:43.395 "claimed": false, 00:16:43.395 "zoned": false, 00:16:43.395 "supported_io_types": { 00:16:43.395 "read": true, 00:16:43.395 "write": true, 00:16:43.395 "unmap": false, 00:16:43.395 "flush": false, 00:16:43.395 "reset": true, 00:16:43.395 "nvme_admin": false, 00:16:43.395 "nvme_io": false, 00:16:43.395 "nvme_io_md": false, 00:16:43.395 "write_zeroes": true, 00:16:43.395 "zcopy": false, 00:16:43.395 "get_zone_info": false, 00:16:43.395 "zone_management": false, 00:16:43.395 "zone_append": false, 00:16:43.395 "compare": false, 00:16:43.395 "compare_and_write": false, 00:16:43.395 "abort": false, 00:16:43.395 "seek_hole": false, 00:16:43.395 "seek_data": false, 00:16:43.395 "copy": false, 00:16:43.395 "nvme_iov_md": false 00:16:43.395 
}, 00:16:43.395 "memory_domains": [ 00:16:43.395 { 00:16:43.395 "dma_device_id": "system", 00:16:43.395 "dma_device_type": 1 00:16:43.395 }, 00:16:43.395 { 00:16:43.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.395 "dma_device_type": 2 00:16:43.395 }, 00:16:43.395 { 00:16:43.395 "dma_device_id": "system", 00:16:43.395 "dma_device_type": 1 00:16:43.395 }, 00:16:43.395 { 00:16:43.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.395 "dma_device_type": 2 00:16:43.395 } 00:16:43.395 ], 00:16:43.395 "driver_specific": { 00:16:43.395 "raid": { 00:16:43.395 "uuid": "fabc703e-1c79-4137-ac7d-a2faaa78bcbb", 00:16:43.395 "strip_size_kb": 0, 00:16:43.395 "state": "online", 00:16:43.395 "raid_level": "raid1", 00:16:43.395 "superblock": true, 00:16:43.395 "num_base_bdevs": 2, 00:16:43.395 "num_base_bdevs_discovered": 2, 00:16:43.395 "num_base_bdevs_operational": 2, 00:16:43.395 "base_bdevs_list": [ 00:16:43.395 { 00:16:43.395 "name": "BaseBdev1", 00:16:43.395 "uuid": "c7129cfb-2866-4a7b-98d4-77adc69643c7", 00:16:43.395 "is_configured": true, 00:16:43.395 "data_offset": 256, 00:16:43.395 "data_size": 7936 00:16:43.395 }, 00:16:43.395 { 00:16:43.395 "name": "BaseBdev2", 00:16:43.395 "uuid": "98d1a5e4-4701-4c51-92cd-4c8fc751813f", 00:16:43.395 "is_configured": true, 00:16:43.395 "data_offset": 256, 00:16:43.395 "data_size": 7936 00:16:43.395 } 00:16:43.395 ] 00:16:43.395 } 00:16:43.395 } 00:16:43.395 }' 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:43.395 BaseBdev2' 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:43.395 01:36:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:43.395 01:36:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.395 [2024-10-09 01:36:42.175672] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.395 01:36:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.395 "name": "Existed_Raid", 00:16:43.395 "uuid": "fabc703e-1c79-4137-ac7d-a2faaa78bcbb", 00:16:43.395 "strip_size_kb": 0, 00:16:43.395 "state": "online", 00:16:43.395 "raid_level": "raid1", 
00:16:43.395 "superblock": true, 00:16:43.395 "num_base_bdevs": 2, 00:16:43.395 "num_base_bdevs_discovered": 1, 00:16:43.395 "num_base_bdevs_operational": 1, 00:16:43.395 "base_bdevs_list": [ 00:16:43.395 { 00:16:43.395 "name": null, 00:16:43.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.395 "is_configured": false, 00:16:43.395 "data_offset": 0, 00:16:43.395 "data_size": 7936 00:16:43.395 }, 00:16:43.395 { 00:16:43.395 "name": "BaseBdev2", 00:16:43.395 "uuid": "98d1a5e4-4701-4c51-92cd-4c8fc751813f", 00:16:43.395 "is_configured": true, 00:16:43.395 "data_offset": 256, 00:16:43.395 "data_size": 7936 00:16:43.395 } 00:16:43.395 ] 00:16:43.395 }' 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.395 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:43.966 [2024-10-09 01:36:42.673151] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:43.966 [2024-10-09 01:36:42.673260] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:43.966 [2024-10-09 01:36:42.694814] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.966 [2024-10-09 01:36:42.694890] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:43.966 [2024-10-09 01:36:42.694907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 99944 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99944 ']' 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99944 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99944 00:16:43.966 killing process with pid 99944 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99944' 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 99944 00:16:43.966 [2024-10-09 01:36:42.786168] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:16:43.966 01:36:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 99944 00:16:43.966 [2024-10-09 01:36:42.787713] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:44.537 01:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:16:44.537 00:16:44.537 real 0m3.914s 00:16:44.537 user 0m5.882s 00:16:44.537 sys 0m0.886s 00:16:44.537 01:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:44.537 01:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.537 ************************************ 00:16:44.537 END TEST raid_state_function_test_sb_md_interleaved 00:16:44.537 ************************************ 00:16:44.537 01:36:43 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:16:44.537 01:36:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:44.537 01:36:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:44.537 01:36:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:44.537 ************************************ 00:16:44.537 START TEST raid_superblock_test_md_interleaved 00:16:44.537 ************************************ 00:16:44.537 01:36:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:16:44.537 01:36:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:44.537 01:36:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:44.537 01:36:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:44.537 01:36:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 
-- # local base_bdevs_malloc 00:16:44.537 01:36:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:44.537 01:36:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:44.537 01:36:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:44.537 01:36:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:44.537 01:36:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:44.537 01:36:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:44.537 01:36:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:44.537 01:36:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:44.537 01:36:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:44.537 01:36:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:44.537 01:36:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:44.537 01:36:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=100181 00:16:44.537 01:36:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:44.537 01:36:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 100181 00:16:44.537 01:36:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 100181 ']' 00:16:44.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:44.537 01:36:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.537 01:36:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:44.537 01:36:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.537 01:36:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:44.537 01:36:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:44.537 [2024-10-09 01:36:43.315204] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:16:44.537 [2024-10-09 01:36:43.315439] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100181 ] 00:16:44.798 [2024-10-09 01:36:43.448172] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:44.798 [2024-10-09 01:36:43.477288] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.798 [2024-10-09 01:36:43.546120] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.798 [2024-10-09 01:36:43.624831] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:44.798 [2024-10-09 01:36:43.624945] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:45.369 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:45.369 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:45.369 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:45.369 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:45.369 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:45.369 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:45.369 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:45.369 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:45.369 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:45.369 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:45.369 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:16:45.369 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.369 01:36:44 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.369 malloc1 00:16:45.369 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.369 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:45.369 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.369 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:45.369 [2024-10-09 01:36:44.165767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:45.369 [2024-10-09 01:36:44.165836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.369 [2024-10-09 01:36:44.165863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:45.369 [2024-10-09 01:36:44.165873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.369 [2024-10-09 01:36:44.168003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.369 [2024-10-09 01:36:44.168039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:45.369 pt1 00:16:45.369 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.369 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:45.370 
01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:45.370 malloc2
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:45.370 [2024-10-09 01:36:44.215120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:45.370 [2024-10-09 01:36:44.215336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:45.370 [2024-10-09 01:36:44.215419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:16:45.370 [2024-10-09 01:36:44.215490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:45.370 [2024-10-09 01:36:44.219872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:45.370 [2024-10-09 01:36:44.219971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:45.370 pt2
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:45.370 [2024-10-09 01:36:44.228314] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:45.370 [2024-10-09 01:36:44.231221] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:45.370 [2024-10-09 01:36:44.231464] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:16:45.370 [2024-10-09 01:36:44.231532] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:16:45.370 [2024-10-09 01:36:44.231661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:16:45.370 [2024-10-09 01:36:44.231776] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:16:45.370 [2024-10-09 01:36:44.231833] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:16:45.370 [2024-10-09 01:36:44.231952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:45.370 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:45.630
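The `verify_raid_bdev_state raid_bdev1 online raid1 0 2` call traced above boils down to fetching the raid bdev's JSON record and comparing a handful of fields against the expected values. A minimal Python sketch of that comparison (the helper function and the sample record below are illustrative, not SPDK code; the field names and values come from the rpc output captured in this log):

```python
import json

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    # Mirrors the field checks performed by verify_raid_bdev_state in bdev_raid.sh.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational

# Sample record shaped like the bdev_raid_get_bdevs output shown in this log.
raid_bdev_info = json.loads("""{
  "name": "raid_bdev1",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}""")

verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 0, 2)
```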
01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:45.630 "name": "raid_bdev1",
00:16:45.630 "uuid": "a391549a-59dc-4f66-a747-1804610d85ac",
00:16:45.630 "strip_size_kb": 0,
00:16:45.630 "state": "online",
00:16:45.630 "raid_level": "raid1",
00:16:45.630 "superblock": true,
00:16:45.630 "num_base_bdevs": 2,
00:16:45.630 "num_base_bdevs_discovered": 2,
00:16:45.630 "num_base_bdevs_operational": 2,
00:16:45.630 "base_bdevs_list": [
00:16:45.630 {
00:16:45.630 "name": "pt1",
00:16:45.630 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:45.630 "is_configured": true,
00:16:45.630 "data_offset": 256,
00:16:45.630 "data_size": 7936
00:16:45.630 },
00:16:45.630 {
00:16:45.630 "name": "pt2",
00:16:45.630 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:45.630 "is_configured": true,
00:16:45.630 "data_offset": 256,
00:16:45.630 "data_size": 7936
00:16:45.630 }
00:16:45.630 ]
00:16:45.630 }'
00:16:45.630 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:45.630 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:45.891 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:16:45.891 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:16:45.891 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:16:45.891 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:16:45.891 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:16:45.891 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:16:45.891 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:45.891 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:45.891 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:16:45.891 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:45.891 [2024-10-09 01:36:44.704615] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:45.891 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:45.891 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:16:45.891 "name": "raid_bdev1",
00:16:45.891 "aliases": [
00:16:45.891 "a391549a-59dc-4f66-a747-1804610d85ac"
00:16:45.891 ],
00:16:45.891 "product_name": "Raid Volume",
00:16:45.891 "block_size": 4128,
00:16:45.891 "num_blocks": 7936,
00:16:45.891 "uuid": "a391549a-59dc-4f66-a747-1804610d85ac",
00:16:45.891 "md_size": 32,
00:16:45.891 "md_interleave": true,
00:16:45.891 "dif_type": 0,
00:16:45.891 "assigned_rate_limits": {
00:16:45.891 "rw_ios_per_sec": 0,
00:16:45.891 "rw_mbytes_per_sec": 0,
00:16:45.891 "r_mbytes_per_sec": 0,
00:16:45.891 "w_mbytes_per_sec": 0
00:16:45.891 },
00:16:45.891 "claimed": false,
00:16:45.891 "zoned": false,
00:16:45.891 "supported_io_types": {
00:16:45.891 "read": true,
00:16:45.891 "write": true,
00:16:45.891 "unmap": false,
00:16:45.891 "flush": false,
00:16:45.891 "reset": true,
00:16:45.891 "nvme_admin": false,
00:16:45.891 "nvme_io": false,
00:16:45.891 "nvme_io_md": false,
00:16:45.891 "write_zeroes": true,
00:16:45.891 "zcopy": false,
00:16:45.891 "get_zone_info": false,
00:16:45.891 "zone_management": false,
00:16:45.891 "zone_append": false,
00:16:45.891 "compare": false,
00:16:45.891 "compare_and_write": false,
00:16:45.891 "abort": false,
00:16:45.891 "seek_hole": false,
00:16:45.891 "seek_data": false,
00:16:45.891 "copy": false,
00:16:45.891 "nvme_iov_md": false
00:16:45.891 },
00:16:45.891 "memory_domains": [
00:16:45.891 {
00:16:45.891 "dma_device_id": "system",
00:16:45.891 "dma_device_type": 1
00:16:45.891 },
00:16:45.891 {
00:16:45.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:45.891 "dma_device_type": 2
00:16:45.891 },
00:16:45.891 {
00:16:45.891 "dma_device_id": "system",
00:16:45.891 "dma_device_type": 1
00:16:45.891 },
00:16:45.891 {
00:16:45.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:45.891 "dma_device_type": 2
00:16:45.891 }
00:16:45.891 ],
00:16:45.891 "driver_specific": {
00:16:45.891 "raid": {
00:16:45.891 "uuid": "a391549a-59dc-4f66-a747-1804610d85ac",
00:16:45.891 "strip_size_kb": 0,
00:16:45.891 "state": "online",
00:16:45.891 "raid_level": "raid1",
00:16:45.891 "superblock": true,
00:16:45.891 "num_base_bdevs": 2,
00:16:45.891 "num_base_bdevs_discovered": 2,
00:16:45.891 "num_base_bdevs_operational": 2,
00:16:45.891 "base_bdevs_list": [
00:16:45.891 {
00:16:45.891 "name": "pt1",
00:16:45.891 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:45.891 "is_configured": true,
00:16:45.891 "data_offset": 256,
00:16:45.891 "data_size": 7936
00:16:45.891 },
00:16:45.891 {
00:16:45.891 "name": "pt2",
00:16:45.891 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:45.891 "is_configured": true,
00:16:45.891 "data_offset": 256,
00:16:45.891 "data_size": 7936
00:16:45.891 }
00:16:45.891 ]
00:16:45.891 }
00:16:45.891 }
00:16:45.891 }'
00:16:45.891 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:16:46.152 pt2'
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r
'[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:16:46.152 [2024-10-09 01:36:44.940557] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a391549a-59dc-4f66-a747-1804610d85ac
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z a391549a-59dc-4f66-a747-1804610d85ac ']'
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:46.152 [2024-10-09 01:36:44.988334] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:46.152 [2024-10-09 01:36:44.988402] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:46.152 [2024-10-09 01:36:44.988495] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:46.152 [2024-10-09 01:36:44.988578] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:46.152 [2024-10-09 01:36:44.988592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.152 01:36:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:46.152 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.152 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:16:46.152 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:16:46.152 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:16:46.152 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:16:46.152 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:46.413 [2024-10-09 01:36:45.128372] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:16:46.413 [2024-10-09 01:36:45.130429] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:16:46.413 [2024-10-09 01:36:45.130489] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:16:46.413 [2024-10-09 01:36:45.130538] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:16:46.413 [2024-10-09 01:36:45.130553] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:46.413 [2024-10-09 01:36:45.130562] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:16:46.413 request:
00:16:46.413 {
00:16:46.413 "name": "raid_bdev1",
00:16:46.413 "raid_level": "raid1",
00:16:46.413 "base_bdevs": [
00:16:46.413 "malloc1",
00:16:46.413 "malloc2"
00:16:46.413 ],
00:16:46.413 "superblock": false,
00:16:46.413 "method": "bdev_raid_create",
00:16:46.413 "req_id": 1
00:16:46.413 }
00:16:46.413 Got JSON-RPC error response
00:16:46.413 response:
00:16:46.413 {
00:16:46.413 "code": -17,
00:16:46.413 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:16:46.413 }
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:46.413
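The negative test traced above expects bdev_raid_create to fail because both malloc bdevs still carry a superblock from the earlier raid_bdev1; the JSON-RPC error payload it receives is printed in the log. A rough Python check of that payload (the helper below is illustrative, not part of the SPDK test suite; -17 is the negated EEXIST errno value):

```python
import json

# JSON-RPC error payload exactly as printed in the log above.
error_response = json.loads("""{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}""")

def is_expected_create_failure(err):
    # The NOT wrapper in autotest_common.sh treats a nonzero exit status as
    # the expected outcome; the RPC reports -EEXIST (-17) for this case.
    return err["code"] == -17 and "File exists" in err["message"]

assert is_expected_create_failure(error_response)
```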
[2024-10-09 01:36:45.192377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:46.413 [2024-10-09 01:36:45.192464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:46.413 [2024-10-09 01:36:45.192494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:16:46.413 [2024-10-09 01:36:45.192534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:46.413 [2024-10-09 01:36:45.194710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:46.413 [2024-10-09 01:36:45.194778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:46.413 [2024-10-09 01:36:45.194834] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:16:46.413 [2024-10-09 01:36:45.194894] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:46.413 pt1
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
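After pt1 is recreated on malloc1, the examine path finds the raid superblock and the `verify_raid_bdev_state raid_bdev1 configuring raid1 0 2` call above extracts the matching record with `jq -r '.[] | select(.name == "raid_bdev1")'`. A Python equivalent of that selection (the sample list is illustrative; the `state` and `num_base_bdevs_discovered` values come from the rpc output captured in this log):

```python
# Stand-in for bdev_raid_get_bdevs output; only the fields needed here are included.
bdevs = [
    {"name": "some_other_bdev", "state": "online"},
    {"name": "raid_bdev1", "state": "configuring", "num_base_bdevs_discovered": 1},
]

# Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
tmp = next(b for b in bdevs if b["name"] == "raid_bdev1")

assert tmp["state"] == "configuring"          # only pt1 has been re-added so far
assert tmp["num_base_bdevs_discovered"] == 1  # pt2 has not been recreated yet
```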
00:16:46.413 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:46.414 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:46.414 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:46.414 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:46.414 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.414 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:46.414 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:46.414 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.414 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:46.414 "name": "raid_bdev1",
00:16:46.414 "uuid": "a391549a-59dc-4f66-a747-1804610d85ac",
00:16:46.414 "strip_size_kb": 0,
00:16:46.414 "state": "configuring",
00:16:46.414 "raid_level": "raid1",
00:16:46.414 "superblock": true,
00:16:46.414 "num_base_bdevs": 2,
00:16:46.414 "num_base_bdevs_discovered": 1,
00:16:46.414 "num_base_bdevs_operational": 2,
00:16:46.414 "base_bdevs_list": [
00:16:46.414 {
00:16:46.414 "name": "pt1",
00:16:46.414 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:46.414 "is_configured": true,
00:16:46.414 "data_offset": 256,
00:16:46.414 "data_size": 7936
00:16:46.414 },
00:16:46.414 {
00:16:46.414 "name": null,
00:16:46.414 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:46.414 "is_configured": false,
00:16:46.414 "data_offset": 256,
00:16:46.414 "data_size": 7936
00:16:46.414 }
00:16:46.414 ]
00:16:46.414 }'
00:16:46.414 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:46.414 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:46.674 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:16:46.674 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:16:46.674 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:16:46.674 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:46.674 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.674 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:46.674 [2024-10-09 01:36:45.524444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:46.674 [2024-10-09 01:36:45.524496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:46.674 [2024-10-09 01:36:45.524513] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:16:46.674 [2024-10-09 01:36:45.524533] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:46.674 [2024-10-09 01:36:45.524654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:46.674 [2024-10-09 01:36:45.524667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:46.674 [2024-10-09 01:36:45.524702] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:16:46.674 [2024-10-09 01:36:45.524721] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:46.674 [2024-10-09 01:36:45.524790] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:16:46.674 [2024-10-09 01:36:45.524800] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:16:46.674 [2024-10-09 01:36:45.524886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:16:46.674 [2024-10-09 01:36:45.524956] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:16:46.674 [2024-10-09 01:36:45.524964] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:16:46.674 [2024-10-09 01:36:45.525013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:46.674 pt2
00:16:46.674 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.674 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:16:46.674 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:16:46.674 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:16:46.674 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:46.674 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:46.674 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:46.674 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:46.674 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:16:46.674 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
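The `verify_raid_bdev_properties` checks exercised in this test build a comparison string per bdev with `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'` and require the raid bdev and each configured base bdev to match. A Python sketch of that comparison (the dicts below only carry the four compared fields, taken from this log's `4128 32 true 0` values; `jq_join` is an illustrative helper mimicking jq's `join`):

```python
import json

def jq_join(values):
    # jq's join(" ") renders numbers and booleans in their JSON form
    # (True -> "true"), which json.dumps reproduces per element.
    return " ".join(json.dumps(v) for v in values)

# Field values as reported for raid_bdev1 and its base bdevs in this log.
raid_bdev = {"block_size": 4128, "md_size": 32, "md_interleave": True, "dif_type": 0}
base_bdev = {"block_size": 4128, "md_size": 32, "md_interleave": True, "dif_type": 0}

keys = ["block_size", "md_size", "md_interleave", "dif_type"]
cmp_raid_bdev = jq_join([raid_bdev[k] for k in keys])
cmp_base_bdev = jq_join([base_bdev[k] for k in keys])

assert cmp_raid_bdev == "4128 32 true 0"
assert cmp_base_bdev == cmp_raid_bdev  # the per-base-bdev equality check
```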
00:16:46.674 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:46.674 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:46.674 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:46.674 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:46.674 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:46.674 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.674 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:46.674 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.934 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:46.934 "name": "raid_bdev1",
00:16:46.934 "uuid": "a391549a-59dc-4f66-a747-1804610d85ac",
00:16:46.934 "strip_size_kb": 0,
00:16:46.934 "state": "online",
00:16:46.934 "raid_level": "raid1",
00:16:46.934 "superblock": true,
00:16:46.934 "num_base_bdevs": 2,
00:16:46.934 "num_base_bdevs_discovered": 2,
00:16:46.934 "num_base_bdevs_operational": 2,
00:16:46.934 "base_bdevs_list": [
00:16:46.934 {
00:16:46.934 "name": "pt1",
00:16:46.934 "uuid": "00000000-0000-0000-0000-000000000001",
00:16:46.934 "is_configured": true,
00:16:46.934 "data_offset": 256,
00:16:46.934 "data_size": 7936
00:16:46.934 },
00:16:46.934 {
00:16:46.934 "name": "pt2",
00:16:46.934 "uuid": "00000000-0000-0000-0000-000000000002",
00:16:46.934 "is_configured": true,
00:16:46.934 "data_offset": 256,
00:16:46.934 "data_size": 7936
00:16:46.934 }
00:16:46.934 ]
00:16:46.934 }'
00:16:46.934 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:46.934 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:47.197 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:16:47.197 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:16:47.197 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:16:47.197 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:16:47.197 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:16:47.197 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:16:47.197 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:16:47.197 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:16:47.197 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:47.197 01:36:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:16:47.197 [2024-10-09 01:36:45.992824] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:47.197 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:47.197 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:16:47.197 "name": "raid_bdev1",
00:16:47.197 "aliases": [
00:16:47.197 "a391549a-59dc-4f66-a747-1804610d85ac"
00:16:47.197 ],
00:16:47.197 "product_name": "Raid Volume",
00:16:47.197 "block_size": 4128,
00:16:47.197 "num_blocks": 7936,
00:16:47.197 "uuid": "a391549a-59dc-4f66-a747-1804610d85ac",
00:16:47.197 "md_size": 32,
00:16:47.197 "md_interleave": true,
00:16:47.197 "dif_type": 0,
00:16:47.197 "assigned_rate_limits": {
00:16:47.197 "rw_ios_per_sec": 0,
00:16:47.197 "rw_mbytes_per_sec": 0,
00:16:47.197 "r_mbytes_per_sec": 0,
00:16:47.197 "w_mbytes_per_sec": 0
00:16:47.197 },
00:16:47.197 "claimed": false,
00:16:47.197 "zoned": false,
00:16:47.197 "supported_io_types": {
00:16:47.197 "read": true,
00:16:47.197 "write": true,
00:16:47.197 "unmap": false,
00:16:47.197 "flush": false,
00:16:47.197 "reset": true,
00:16:47.197 "nvme_admin": false,
00:16:47.197 "nvme_io": false,
00:16:47.197 "nvme_io_md": false,
00:16:47.197 "write_zeroes": true,
00:16:47.197 "zcopy": false,
00:16:47.197 "get_zone_info": false,
00:16:47.197 "zone_management": false,
00:16:47.197 "zone_append": false,
00:16:47.197 "compare": false,
00:16:47.197 "compare_and_write": false,
00:16:47.197 "abort": false,
00:16:47.197 "seek_hole": false,
00:16:47.197 "seek_data": false,
00:16:47.197 "copy": false,
00:16:47.197 "nvme_iov_md": false
00:16:47.197 },
00:16:47.197 "memory_domains": [
00:16:47.197 {
00:16:47.197 "dma_device_id": "system",
00:16:47.197 "dma_device_type": 1
00:16:47.197 },
00:16:47.197 {
00:16:47.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:47.197 "dma_device_type": 2
00:16:47.197 },
00:16:47.197 {
00:16:47.197 "dma_device_id": "system",
00:16:47.197 "dma_device_type": 1
00:16:47.197 },
00:16:47.197 {
00:16:47.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:47.197 "dma_device_type": 2
00:16:47.197 }
00:16:47.197 ],
00:16:47.197 "driver_specific": {
00:16:47.197 "raid": {
00:16:47.197 "uuid": "a391549a-59dc-4f66-a747-1804610d85ac",
00:16:47.197 "strip_size_kb": 0,
00:16:47.197 "state": "online",
00:16:47.197 "raid_level": "raid1",
00:16:47.197 "superblock": true,
00:16:47.197 "num_base_bdevs": 2,
00:16:47.197 "num_base_bdevs_discovered": 2,
00:16:47.197 "num_base_bdevs_operational":
2, 00:16:47.197 "base_bdevs_list": [ 00:16:47.197 { 00:16:47.197 "name": "pt1", 00:16:47.197 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:47.197 "is_configured": true, 00:16:47.197 "data_offset": 256, 00:16:47.197 "data_size": 7936 00:16:47.197 }, 00:16:47.197 { 00:16:47.197 "name": "pt2", 00:16:47.197 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:47.197 "is_configured": true, 00:16:47.197 "data_offset": 256, 00:16:47.197 "data_size": 7936 00:16:47.197 } 00:16:47.197 ] 00:16:47.197 } 00:16:47.197 } 00:16:47.197 }' 00:16:47.197 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:47.197 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:47.197 pt2' 00:16:47.197 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.457 01:36:46 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:47.457 [2024-10-09 01:36:46.220861] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' a391549a-59dc-4f66-a747-1804610d85ac '!=' a391549a-59dc-4f66-a747-1804610d85ac ']' 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:47.457 [2024-10-09 01:36:46.268668] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.457 "name": "raid_bdev1", 00:16:47.457 "uuid": "a391549a-59dc-4f66-a747-1804610d85ac", 00:16:47.457 "strip_size_kb": 0, 00:16:47.457 "state": "online", 00:16:47.457 "raid_level": "raid1", 00:16:47.457 "superblock": true, 00:16:47.457 "num_base_bdevs": 2, 00:16:47.457 "num_base_bdevs_discovered": 1, 00:16:47.457 "num_base_bdevs_operational": 1, 00:16:47.457 "base_bdevs_list": [ 00:16:47.457 { 00:16:47.457 "name": null, 00:16:47.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.457 "is_configured": false, 00:16:47.457 "data_offset": 0, 00:16:47.457 "data_size": 7936 00:16:47.457 }, 00:16:47.457 { 00:16:47.457 "name": "pt2", 00:16:47.457 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:47.457 "is_configured": true, 00:16:47.457 "data_offset": 256, 00:16:47.457 "data_size": 7936 00:16:47.457 } 00:16:47.457 ] 00:16:47.457 
}' 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.457 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.028 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:48.028 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.028 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.028 [2024-10-09 01:36:46.656741] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:48.028 [2024-10-09 01:36:46.656833] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:48.028 [2024-10-09 01:36:46.656906] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:48.028 [2024-10-09 01:36:46.656961] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:48.028 [2024-10-09 01:36:46.656995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:48.028 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.028 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.028 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:48.028 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.028 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.028 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.028 
01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:48.028 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:48.028 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:48.028 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:48.028 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:48.028 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.028 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.028 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.028 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:48.028 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:48.028 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:48.028 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:48.028 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:16:48.028 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:48.028 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.028 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.029 [2024-10-09 01:36:46.728764] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:48.029 [2024-10-09 01:36:46.728837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.029 [2024-10-09 01:36:46.728852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:48.029 [2024-10-09 01:36:46.728864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.029 [2024-10-09 01:36:46.731006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.029 [2024-10-09 01:36:46.731044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:48.029 [2024-10-09 01:36:46.731083] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:48.029 [2024-10-09 01:36:46.731112] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:48.029 [2024-10-09 01:36:46.731160] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:48.029 [2024-10-09 01:36:46.731169] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:48.029 [2024-10-09 01:36:46.731248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:48.029 [2024-10-09 01:36:46.731307] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:48.029 [2024-10-09 01:36:46.731314] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:48.029 [2024-10-09 01:36:46.731370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.029 pt2 00:16:48.029 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.029 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:48.029 01:36:46 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.029 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.029 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.029 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.029 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:48.029 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.029 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.029 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.029 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.029 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.029 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.029 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.029 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.029 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.029 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.029 "name": "raid_bdev1", 00:16:48.029 "uuid": "a391549a-59dc-4f66-a747-1804610d85ac", 00:16:48.029 "strip_size_kb": 0, 00:16:48.029 "state": "online", 00:16:48.029 
"raid_level": "raid1", 00:16:48.029 "superblock": true, 00:16:48.029 "num_base_bdevs": 2, 00:16:48.029 "num_base_bdevs_discovered": 1, 00:16:48.029 "num_base_bdevs_operational": 1, 00:16:48.029 "base_bdevs_list": [ 00:16:48.029 { 00:16:48.029 "name": null, 00:16:48.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.029 "is_configured": false, 00:16:48.029 "data_offset": 256, 00:16:48.029 "data_size": 7936 00:16:48.029 }, 00:16:48.029 { 00:16:48.029 "name": "pt2", 00:16:48.029 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:48.029 "is_configured": true, 00:16:48.029 "data_offset": 256, 00:16:48.029 "data_size": 7936 00:16:48.029 } 00:16:48.029 ] 00:16:48.029 }' 00:16:48.029 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.029 01:36:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.289 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:48.289 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.289 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.289 [2024-10-09 01:36:47.168870] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:48.289 [2024-10-09 01:36:47.168946] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:48.289 [2024-10-09 01:36:47.169013] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:48.289 [2024-10-09 01:36:47.169069] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:48.289 [2024-10-09 01:36:47.169100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:48.289 01:36:47 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.289 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:48.289 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.289 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.289 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.550 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.550 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:48.550 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:48.550 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:48.550 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:48.550 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.550 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.550 [2024-10-09 01:36:47.216917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:48.550 [2024-10-09 01:36:47.217001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.550 [2024-10-09 01:36:47.217040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:48.550 [2024-10-09 01:36:47.217066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.550 [2024-10-09 01:36:47.219157] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.550 [2024-10-09 01:36:47.219220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:48.550 [2024-10-09 01:36:47.219282] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:48.550 [2024-10-09 01:36:47.219331] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:48.550 [2024-10-09 01:36:47.219442] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:48.550 [2024-10-09 01:36:47.219492] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:48.550 [2024-10-09 01:36:47.219537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:48.550 [2024-10-09 01:36:47.219603] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:48.550 [2024-10-09 01:36:47.219691] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:48.550 [2024-10-09 01:36:47.219732] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:48.550 [2024-10-09 01:36:47.219806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:48.550 [2024-10-09 01:36:47.219893] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:48.550 [2024-10-09 01:36:47.219934] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:48.550 [2024-10-09 01:36:47.220021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.550 pt1 00:16:48.550 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.550 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:48.550 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:48.550 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.550 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.550 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.550 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.550 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:48.550 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.550 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.550 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.550 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.550 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.550 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.550 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.550 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.551 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.551 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.551 "name": "raid_bdev1", 00:16:48.551 "uuid": "a391549a-59dc-4f66-a747-1804610d85ac", 00:16:48.551 "strip_size_kb": 0, 00:16:48.551 "state": "online", 00:16:48.551 "raid_level": "raid1", 00:16:48.551 "superblock": true, 00:16:48.551 "num_base_bdevs": 2, 00:16:48.551 "num_base_bdevs_discovered": 1, 00:16:48.551 "num_base_bdevs_operational": 1, 00:16:48.551 "base_bdevs_list": [ 00:16:48.551 { 00:16:48.551 "name": null, 00:16:48.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.551 "is_configured": false, 00:16:48.551 "data_offset": 256, 00:16:48.551 "data_size": 7936 00:16:48.551 }, 00:16:48.551 { 00:16:48.551 "name": "pt2", 00:16:48.551 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:48.551 "is_configured": true, 00:16:48.551 "data_offset": 256, 00:16:48.551 "data_size": 7936 00:16:48.551 } 00:16:48.551 ] 00:16:48.551 }' 00:16:48.551 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.551 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.811 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:48.811 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:48.811 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.811 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.811 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.811 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:48.811 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd 
bdev_get_bdevs -b raid_bdev1 00:16:48.811 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:48.811 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.811 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:48.811 [2024-10-09 01:36:47.689236] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:49.079 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.079 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' a391549a-59dc-4f66-a747-1804610d85ac '!=' a391549a-59dc-4f66-a747-1804610d85ac ']' 00:16:49.079 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 100181 00:16:49.079 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 100181 ']' 00:16:49.079 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 100181 00:16:49.079 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:49.079 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:49.079 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100181 00:16:49.079 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:49.079 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:49.079 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100181' 00:16:49.079 killing process with pid 100181 
00:16:49.079 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 100181 00:16:49.079 [2024-10-09 01:36:47.764938] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:49.079 01:36:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 100181 00:16:49.079 [2024-10-09 01:36:47.765058] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:49.079 [2024-10-09 01:36:47.765128] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:49.079 [2024-10-09 01:36:47.765177] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:49.079 [2024-10-09 01:36:47.807921] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:49.382 01:36:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:16:49.382 ************************************ 00:16:49.382 END TEST raid_superblock_test_md_interleaved 00:16:49.382 ************************************ 00:16:49.382 00:16:49.382 real 0m4.950s 00:16:49.382 user 0m7.845s 00:16:49.382 sys 0m1.104s 00:16:49.382 01:36:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:49.382 01:36:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:49.382 01:36:48 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:16:49.382 01:36:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:49.382 01:36:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:49.382 01:36:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:49.382 ************************************ 00:16:49.382 START TEST raid_rebuild_test_sb_md_interleaved 00:16:49.382 
************************************ 00:16:49.382 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:16:49.383 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:49.383 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:49.383 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:49.383 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:49.383 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:16:49.383 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:49.383 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:49.383 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:49.383 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:49.383 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:49.383 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:49.643 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:49.643 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:49.643 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:49.643 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:49.643 01:36:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:49.643 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:49.643 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:49.643 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:49.643 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:49.643 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:49.643 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:49.643 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:49.643 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:49.643 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=100498 00:16:49.643 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:49.643 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 100498 00:16:49.643 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 100498 ']' 00:16:49.643 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.643 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:49.643 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.643 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:49.644 01:36:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:49.644 [2024-10-09 01:36:48.363056] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:16:49.644 [2024-10-09 01:36:48.363241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100498 ] 00:16:49.644 I/O size of 3145728 is greater than zero copy threshold (65536). Zero copy mechanism will not be used. 00:16:49.644 [2024-10-09 01:36:48.494894] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:16:49.644 [2024-10-09 01:36:48.523812] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.904 [2024-10-09 01:36:48.594265] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.904 [2024-10-09 01:36:48.672653] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.904 [2024-10-09 01:36:48.672693] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.475 BaseBdev1_malloc 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.475 [2024-10-09 01:36:49.226063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:50.475 [2024-10-09 01:36:49.226155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.475 
[2024-10-09 01:36:49.226185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:50.475 [2024-10-09 01:36:49.226207] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.475 [2024-10-09 01:36:49.228279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.475 [2024-10-09 01:36:49.228315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:50.475 BaseBdev1 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.475 BaseBdev2_malloc 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.475 [2024-10-09 01:36:49.277858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:50.475 [2024-10-09 01:36:49.277971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.475 [2024-10-09 01:36:49.278017] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:50.475 [2024-10-09 01:36:49.278044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.475 [2024-10-09 01:36:49.282115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.475 [2024-10-09 01:36:49.282166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:50.475 BaseBdev2 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.475 spare_malloc 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.475 spare_delay 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.475 01:36:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.475 [2024-10-09 01:36:49.327095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:50.475 [2024-10-09 01:36:49.327151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.475 [2024-10-09 01:36:49.327170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:50.475 [2024-10-09 01:36:49.327181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.475 [2024-10-09 01:36:49.329280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.475 [2024-10-09 01:36:49.329396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:50.475 spare 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.475 [2024-10-09 01:36:49.339149] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.475 [2024-10-09 01:36:49.341224] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:50.475 [2024-10-09 01:36:49.341392] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:50.475 [2024-10-09 01:36:49.341408] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:50.475 [2024-10-09 01:36:49.341495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 
00:16:50.475 [2024-10-09 01:36:49.341573] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:50.475 [2024-10-09 01:36:49.341582] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:50.475 [2024-10-09 01:36:49.341662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.475 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.476 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:50.476 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.476 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.476 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.476 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.476 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.476 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.476 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.476 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.736 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.736 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.736 "name": "raid_bdev1", 00:16:50.736 "uuid": "19f059fb-811c-4ab6-bb6b-48f88d4b776b", 00:16:50.736 "strip_size_kb": 0, 00:16:50.736 "state": "online", 00:16:50.736 "raid_level": "raid1", 00:16:50.736 "superblock": true, 00:16:50.736 "num_base_bdevs": 2, 00:16:50.736 "num_base_bdevs_discovered": 2, 00:16:50.736 "num_base_bdevs_operational": 2, 00:16:50.736 "base_bdevs_list": [ 00:16:50.736 { 00:16:50.736 "name": "BaseBdev1", 00:16:50.736 "uuid": "d8b14c09-78c5-577d-8db4-6bca3add6057", 00:16:50.736 "is_configured": true, 00:16:50.736 "data_offset": 256, 00:16:50.736 "data_size": 7936 00:16:50.736 }, 00:16:50.736 { 00:16:50.736 "name": "BaseBdev2", 00:16:50.736 "uuid": "42311485-cbe5-55dc-9cfd-a70b980e7cd5", 00:16:50.736 "is_configured": true, 00:16:50.736 "data_offset": 256, 00:16:50.736 "data_size": 7936 00:16:50.736 } 00:16:50.736 ] 00:16:50.736 }' 00:16:50.736 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.736 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.996 
01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.996 [2024-10-09 01:36:49.771504] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:50.996 [2024-10-09 01:36:49.867239] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.996 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.256 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.256 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.256 "name": "raid_bdev1", 00:16:51.256 "uuid": "19f059fb-811c-4ab6-bb6b-48f88d4b776b", 00:16:51.256 "strip_size_kb": 0, 00:16:51.256 "state": "online", 00:16:51.256 "raid_level": "raid1", 00:16:51.256 "superblock": true, 00:16:51.256 "num_base_bdevs": 2, 00:16:51.256 "num_base_bdevs_discovered": 1, 00:16:51.256 "num_base_bdevs_operational": 1, 00:16:51.256 "base_bdevs_list": [ 00:16:51.256 { 00:16:51.256 "name": null, 00:16:51.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.256 "is_configured": false, 00:16:51.256 "data_offset": 0, 00:16:51.256 "data_size": 7936 00:16:51.256 }, 00:16:51.256 { 00:16:51.256 "name": "BaseBdev2", 00:16:51.256 "uuid": "42311485-cbe5-55dc-9cfd-a70b980e7cd5", 00:16:51.256 "is_configured": true, 00:16:51.256 "data_offset": 256, 00:16:51.256 "data_size": 7936 00:16:51.256 } 00:16:51.256 ] 00:16:51.256 }' 00:16:51.256 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.256 01:36:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.516 01:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:51.516 01:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.516 01:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:51.516 [2024-10-09 01:36:50.267347] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:51.516 [2024-10-09 01:36:50.272394] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:51.516 01:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.516 01:36:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:51.516 
[2024-10-09 01:36:50.274582] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:52.455 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.455 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.455 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.455 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.455 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.455 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.455 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.455 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.455 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:52.455 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.455 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.455 "name": "raid_bdev1", 00:16:52.455 "uuid": "19f059fb-811c-4ab6-bb6b-48f88d4b776b", 00:16:52.455 "strip_size_kb": 0, 00:16:52.455 "state": "online", 00:16:52.455 "raid_level": "raid1", 00:16:52.455 "superblock": true, 00:16:52.456 "num_base_bdevs": 2, 00:16:52.456 "num_base_bdevs_discovered": 2, 00:16:52.456 "num_base_bdevs_operational": 2, 00:16:52.456 "process": { 00:16:52.456 "type": "rebuild", 00:16:52.456 "target": "spare", 00:16:52.456 "progress": { 00:16:52.456 
"blocks": 2560, 00:16:52.456 "percent": 32 00:16:52.456 } 00:16:52.456 }, 00:16:52.456 "base_bdevs_list": [ 00:16:52.456 { 00:16:52.456 "name": "spare", 00:16:52.456 "uuid": "5a1093d9-729e-521c-bb28-da342db4bb1e", 00:16:52.456 "is_configured": true, 00:16:52.456 "data_offset": 256, 00:16:52.456 "data_size": 7936 00:16:52.456 }, 00:16:52.456 { 00:16:52.456 "name": "BaseBdev2", 00:16:52.456 "uuid": "42311485-cbe5-55dc-9cfd-a70b980e7cd5", 00:16:52.456 "is_configured": true, 00:16:52.456 "data_offset": 256, 00:16:52.456 "data_size": 7936 00:16:52.456 } 00:16:52.456 ] 00:16:52.456 }' 00:16:52.456 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.716 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:52.716 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.716 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.716 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:52.716 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.716 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:52.716 [2024-10-09 01:36:51.428418] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:52.716 [2024-10-09 01:36:51.485230] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:52.716 [2024-10-09 01:36:51.485293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.716 [2024-10-09 01:36:51.485308] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:52.716 [2024-10-09 01:36:51.485322] 
bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:52.716 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.716 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:52.716 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.716 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.716 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.716 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.716 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:52.716 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.716 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.716 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.716 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.716 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.716 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.716 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.716 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:16:52.716 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.716 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.716 "name": "raid_bdev1", 00:16:52.716 "uuid": "19f059fb-811c-4ab6-bb6b-48f88d4b776b", 00:16:52.716 "strip_size_kb": 0, 00:16:52.716 "state": "online", 00:16:52.716 "raid_level": "raid1", 00:16:52.716 "superblock": true, 00:16:52.716 "num_base_bdevs": 2, 00:16:52.716 "num_base_bdevs_discovered": 1, 00:16:52.716 "num_base_bdevs_operational": 1, 00:16:52.716 "base_bdevs_list": [ 00:16:52.716 { 00:16:52.716 "name": null, 00:16:52.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.716 "is_configured": false, 00:16:52.716 "data_offset": 0, 00:16:52.716 "data_size": 7936 00:16:52.716 }, 00:16:52.716 { 00:16:52.716 "name": "BaseBdev2", 00:16:52.716 "uuid": "42311485-cbe5-55dc-9cfd-a70b980e7cd5", 00:16:52.716 "is_configured": true, 00:16:52.716 "data_offset": 256, 00:16:52.716 "data_size": 7936 00:16:52.716 } 00:16:52.716 ] 00:16:52.716 }' 00:16:52.716 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.716 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.287 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:53.287 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.287 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:53.287 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:53.287 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.287 01:36:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.287 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.287 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.287 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.287 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.287 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.287 "name": "raid_bdev1", 00:16:53.287 "uuid": "19f059fb-811c-4ab6-bb6b-48f88d4b776b", 00:16:53.287 "strip_size_kb": 0, 00:16:53.287 "state": "online", 00:16:53.287 "raid_level": "raid1", 00:16:53.287 "superblock": true, 00:16:53.287 "num_base_bdevs": 2, 00:16:53.287 "num_base_bdevs_discovered": 1, 00:16:53.287 "num_base_bdevs_operational": 1, 00:16:53.287 "base_bdevs_list": [ 00:16:53.287 { 00:16:53.287 "name": null, 00:16:53.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.287 "is_configured": false, 00:16:53.287 "data_offset": 0, 00:16:53.287 "data_size": 7936 00:16:53.287 }, 00:16:53.287 { 00:16:53.287 "name": "BaseBdev2", 00:16:53.287 "uuid": "42311485-cbe5-55dc-9cfd-a70b980e7cd5", 00:16:53.287 "is_configured": true, 00:16:53.287 "data_offset": 256, 00:16:53.287 "data_size": 7936 00:16:53.287 } 00:16:53.287 ] 00:16:53.287 }' 00:16:53.287 01:36:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.287 01:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:53.287 01:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.287 01:36:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:53.287 01:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:53.287 01:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.287 01:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.287 [2024-10-09 01:36:52.087567] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:53.287 [2024-10-09 01:36:52.091593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:53.287 01:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.287 01:36:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:53.287 [2024-10-09 01:36:52.093768] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:54.228 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.228 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.228 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.228 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.228 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.228 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.228 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:54.228 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.228 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.489 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.489 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.489 "name": "raid_bdev1", 00:16:54.489 "uuid": "19f059fb-811c-4ab6-bb6b-48f88d4b776b", 00:16:54.489 "strip_size_kb": 0, 00:16:54.489 "state": "online", 00:16:54.489 "raid_level": "raid1", 00:16:54.489 "superblock": true, 00:16:54.489 "num_base_bdevs": 2, 00:16:54.489 "num_base_bdevs_discovered": 2, 00:16:54.489 "num_base_bdevs_operational": 2, 00:16:54.489 "process": { 00:16:54.489 "type": "rebuild", 00:16:54.489 "target": "spare", 00:16:54.489 "progress": { 00:16:54.489 "blocks": 2560, 00:16:54.489 "percent": 32 00:16:54.489 } 00:16:54.489 }, 00:16:54.489 "base_bdevs_list": [ 00:16:54.489 { 00:16:54.489 "name": "spare", 00:16:54.489 "uuid": "5a1093d9-729e-521c-bb28-da342db4bb1e", 00:16:54.489 "is_configured": true, 00:16:54.489 "data_offset": 256, 00:16:54.489 "data_size": 7936 00:16:54.489 }, 00:16:54.489 { 00:16:54.489 "name": "BaseBdev2", 00:16:54.489 "uuid": "42311485-cbe5-55dc-9cfd-a70b980e7cd5", 00:16:54.489 "is_configured": true, 00:16:54.489 "data_offset": 256, 00:16:54.489 "data_size": 7936 00:16:54.489 } 00:16:54.489 ] 00:16:54.489 }' 00:16:54.489 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.489 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:54.489 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.489 01:36:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.489 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:54.489 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:54.489 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:54.489 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:54.489 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:54.489 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:54.489 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=629 00:16:54.489 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:54.489 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.489 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.489 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.489 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.489 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.489 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.489 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.489 01:36:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.489 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.489 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.489 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.489 "name": "raid_bdev1", 00:16:54.489 "uuid": "19f059fb-811c-4ab6-bb6b-48f88d4b776b", 00:16:54.489 "strip_size_kb": 0, 00:16:54.489 "state": "online", 00:16:54.489 "raid_level": "raid1", 00:16:54.489 "superblock": true, 00:16:54.489 "num_base_bdevs": 2, 00:16:54.489 "num_base_bdevs_discovered": 2, 00:16:54.489 "num_base_bdevs_operational": 2, 00:16:54.489 "process": { 00:16:54.489 "type": "rebuild", 00:16:54.489 "target": "spare", 00:16:54.489 "progress": { 00:16:54.489 "blocks": 2816, 00:16:54.489 "percent": 35 00:16:54.489 } 00:16:54.489 }, 00:16:54.489 "base_bdevs_list": [ 00:16:54.489 { 00:16:54.489 "name": "spare", 00:16:54.489 "uuid": "5a1093d9-729e-521c-bb28-da342db4bb1e", 00:16:54.489 "is_configured": true, 00:16:54.489 "data_offset": 256, 00:16:54.489 "data_size": 7936 00:16:54.489 }, 00:16:54.489 { 00:16:54.489 "name": "BaseBdev2", 00:16:54.489 "uuid": "42311485-cbe5-55dc-9cfd-a70b980e7cd5", 00:16:54.489 "is_configured": true, 00:16:54.489 "data_offset": 256, 00:16:54.489 "data_size": 7936 00:16:54.489 } 00:16:54.489 ] 00:16:54.489 }' 00:16:54.489 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.489 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:54.489 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.749 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.749 01:36:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:55.690 01:36:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:55.690 01:36:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.690 01:36:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.690 01:36:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.690 01:36:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.690 01:36:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.690 01:36:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.690 01:36:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.690 01:36:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.690 01:36:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.690 01:36:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.690 01:36:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.690 "name": "raid_bdev1", 00:16:55.690 "uuid": "19f059fb-811c-4ab6-bb6b-48f88d4b776b", 00:16:55.690 "strip_size_kb": 0, 00:16:55.690 "state": "online", 00:16:55.690 "raid_level": "raid1", 00:16:55.690 "superblock": true, 00:16:55.690 "num_base_bdevs": 2, 00:16:55.690 "num_base_bdevs_discovered": 2, 00:16:55.690 
"num_base_bdevs_operational": 2, 00:16:55.690 "process": { 00:16:55.690 "type": "rebuild", 00:16:55.690 "target": "spare", 00:16:55.690 "progress": { 00:16:55.690 "blocks": 5632, 00:16:55.690 "percent": 70 00:16:55.690 } 00:16:55.690 }, 00:16:55.690 "base_bdevs_list": [ 00:16:55.690 { 00:16:55.690 "name": "spare", 00:16:55.690 "uuid": "5a1093d9-729e-521c-bb28-da342db4bb1e", 00:16:55.690 "is_configured": true, 00:16:55.690 "data_offset": 256, 00:16:55.690 "data_size": 7936 00:16:55.690 }, 00:16:55.690 { 00:16:55.690 "name": "BaseBdev2", 00:16:55.690 "uuid": "42311485-cbe5-55dc-9cfd-a70b980e7cd5", 00:16:55.690 "is_configured": true, 00:16:55.690 "data_offset": 256, 00:16:55.690 "data_size": 7936 00:16:55.690 } 00:16:55.690 ] 00:16:55.690 }' 00:16:55.690 01:36:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.690 01:36:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.690 01:36:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.690 01:36:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.690 01:36:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:56.631 [2024-10-09 01:36:55.219292] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:56.631 [2024-10-09 01:36:55.219450] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:56.631 [2024-10-09 01:36:55.219571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.891 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:56.891 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:16:56.891 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.891 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:56.892 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:56.892 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.892 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.892 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.892 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.892 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.892 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.892 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.892 "name": "raid_bdev1", 00:16:56.892 "uuid": "19f059fb-811c-4ab6-bb6b-48f88d4b776b", 00:16:56.892 "strip_size_kb": 0, 00:16:56.892 "state": "online", 00:16:56.892 "raid_level": "raid1", 00:16:56.892 "superblock": true, 00:16:56.892 "num_base_bdevs": 2, 00:16:56.892 "num_base_bdevs_discovered": 2, 00:16:56.892 "num_base_bdevs_operational": 2, 00:16:56.892 "base_bdevs_list": [ 00:16:56.892 { 00:16:56.892 "name": "spare", 00:16:56.892 "uuid": "5a1093d9-729e-521c-bb28-da342db4bb1e", 00:16:56.892 "is_configured": true, 00:16:56.892 "data_offset": 256, 00:16:56.892 "data_size": 7936 00:16:56.892 }, 00:16:56.892 { 00:16:56.892 "name": "BaseBdev2", 00:16:56.892 "uuid": "42311485-cbe5-55dc-9cfd-a70b980e7cd5", 00:16:56.892 
"is_configured": true, 00:16:56.892 "data_offset": 256, 00:16:56.892 "data_size": 7936 00:16:56.892 } 00:16:56.892 ] 00:16:56.892 }' 00:16:56.892 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.892 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:56.892 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.892 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:56.892 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:16:56.892 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:56.892 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.892 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:56.892 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:56.892 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.892 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.892 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.892 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.892 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.892 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:16:56.892 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.892 "name": "raid_bdev1", 00:16:56.892 "uuid": "19f059fb-811c-4ab6-bb6b-48f88d4b776b", 00:16:56.892 "strip_size_kb": 0, 00:16:56.892 "state": "online", 00:16:56.892 "raid_level": "raid1", 00:16:56.892 "superblock": true, 00:16:56.892 "num_base_bdevs": 2, 00:16:56.892 "num_base_bdevs_discovered": 2, 00:16:56.892 "num_base_bdevs_operational": 2, 00:16:56.892 "base_bdevs_list": [ 00:16:56.892 { 00:16:56.892 "name": "spare", 00:16:56.892 "uuid": "5a1093d9-729e-521c-bb28-da342db4bb1e", 00:16:56.892 "is_configured": true, 00:16:56.892 "data_offset": 256, 00:16:56.892 "data_size": 7936 00:16:56.892 }, 00:16:56.892 { 00:16:56.892 "name": "BaseBdev2", 00:16:56.892 "uuid": "42311485-cbe5-55dc-9cfd-a70b980e7cd5", 00:16:56.892 "is_configured": true, 00:16:56.892 "data_offset": 256, 00:16:56.892 "data_size": 7936 00:16:56.892 } 00:16:56.892 ] 00:16:56.892 }' 00:16:56.892 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.892 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:56.892 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.151 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:57.151 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:57.151 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.151 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.151 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:16:57.151 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.151 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:57.151 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.151 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.151 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.151 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.151 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.151 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.151 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.151 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.151 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.151 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.151 "name": "raid_bdev1", 00:16:57.151 "uuid": "19f059fb-811c-4ab6-bb6b-48f88d4b776b", 00:16:57.151 "strip_size_kb": 0, 00:16:57.151 "state": "online", 00:16:57.151 "raid_level": "raid1", 00:16:57.151 "superblock": true, 00:16:57.151 "num_base_bdevs": 2, 00:16:57.151 "num_base_bdevs_discovered": 2, 00:16:57.151 "num_base_bdevs_operational": 2, 00:16:57.151 "base_bdevs_list": [ 00:16:57.151 { 00:16:57.151 "name": "spare", 00:16:57.151 "uuid": "5a1093d9-729e-521c-bb28-da342db4bb1e", 00:16:57.151 
"is_configured": true, 00:16:57.151 "data_offset": 256, 00:16:57.151 "data_size": 7936 00:16:57.151 }, 00:16:57.151 { 00:16:57.151 "name": "BaseBdev2", 00:16:57.151 "uuid": "42311485-cbe5-55dc-9cfd-a70b980e7cd5", 00:16:57.151 "is_configured": true, 00:16:57.152 "data_offset": 256, 00:16:57.152 "data_size": 7936 00:16:57.152 } 00:16:57.152 ] 00:16:57.152 }' 00:16:57.152 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.152 01:36:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.411 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:57.411 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.411 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.411 [2024-10-09 01:36:56.229244] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:57.411 [2024-10-09 01:36:56.229349] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:57.411 [2024-10-09 01:36:56.229468] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:57.411 [2024-10-09 01:36:56.229583] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:57.411 [2024-10-09 01:36:56.229632] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:57.411 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.411 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.411 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:16:57.411 
01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.411 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.411 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.411 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:57.411 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:16:57.412 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:57.412 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:57.412 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.412 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.412 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.412 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:57.412 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.412 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.412 [2024-10-09 01:36:56.301276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:57.412 [2024-10-09 01:36:56.301332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.412 [2024-10-09 01:36:56.301359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:57.412 [2024-10-09 01:36:56.301369] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.412 [2024-10-09 01:36:56.303638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.672 [2024-10-09 01:36:56.303713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:57.672 [2024-10-09 01:36:56.303773] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:57.672 [2024-10-09 01:36:56.303826] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:57.672 [2024-10-09 01:36:56.303928] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:57.672 spare 00:16:57.672 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.672 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:57.672 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.672 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.672 [2024-10-09 01:36:56.403988] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:57.672 [2024-10-09 01:36:56.404017] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:57.672 [2024-10-09 01:36:56.404108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:57.672 [2024-10-09 01:36:56.404180] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:57.672 [2024-10-09 01:36:56.404192] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:57.672 [2024-10-09 01:36:56.404263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.672 01:36:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.672 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:57.672 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.672 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.672 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.672 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.672 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:57.672 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.672 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.672 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.672 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.672 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.672 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.672 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.672 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.672 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.672 01:36:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.672 "name": "raid_bdev1", 00:16:57.672 "uuid": "19f059fb-811c-4ab6-bb6b-48f88d4b776b", 00:16:57.672 "strip_size_kb": 0, 00:16:57.672 "state": "online", 00:16:57.672 "raid_level": "raid1", 00:16:57.672 "superblock": true, 00:16:57.672 "num_base_bdevs": 2, 00:16:57.672 "num_base_bdevs_discovered": 2, 00:16:57.672 "num_base_bdevs_operational": 2, 00:16:57.672 "base_bdevs_list": [ 00:16:57.672 { 00:16:57.672 "name": "spare", 00:16:57.672 "uuid": "5a1093d9-729e-521c-bb28-da342db4bb1e", 00:16:57.672 "is_configured": true, 00:16:57.672 "data_offset": 256, 00:16:57.672 "data_size": 7936 00:16:57.672 }, 00:16:57.672 { 00:16:57.672 "name": "BaseBdev2", 00:16:57.672 "uuid": "42311485-cbe5-55dc-9cfd-a70b980e7cd5", 00:16:57.672 "is_configured": true, 00:16:57.672 "data_offset": 256, 00:16:57.672 "data_size": 7936 00:16:57.672 } 00:16:57.672 ] 00:16:57.672 }' 00:16:57.672 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.672 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.049 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:58.049 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.049 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:58.049 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:58.049 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.049 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.049 01:36:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.049 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.049 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.049 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.049 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.049 "name": "raid_bdev1", 00:16:58.049 "uuid": "19f059fb-811c-4ab6-bb6b-48f88d4b776b", 00:16:58.049 "strip_size_kb": 0, 00:16:58.049 "state": "online", 00:16:58.049 "raid_level": "raid1", 00:16:58.049 "superblock": true, 00:16:58.049 "num_base_bdevs": 2, 00:16:58.049 "num_base_bdevs_discovered": 2, 00:16:58.049 "num_base_bdevs_operational": 2, 00:16:58.049 "base_bdevs_list": [ 00:16:58.049 { 00:16:58.049 "name": "spare", 00:16:58.049 "uuid": "5a1093d9-729e-521c-bb28-da342db4bb1e", 00:16:58.049 "is_configured": true, 00:16:58.049 "data_offset": 256, 00:16:58.049 "data_size": 7936 00:16:58.049 }, 00:16:58.049 { 00:16:58.049 "name": "BaseBdev2", 00:16:58.049 "uuid": "42311485-cbe5-55dc-9cfd-a70b980e7cd5", 00:16:58.049 "is_configured": true, 00:16:58.049 "data_offset": 256, 00:16:58.049 "data_size": 7936 00:16:58.049 } 00:16:58.049 ] 00:16:58.049 }' 00:16:58.049 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.049 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:58.049 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.320 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:58.320 01:36:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.320 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.320 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.320 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:58.320 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.320 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:58.320 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:58.320 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.320 01:36:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.320 [2024-10-09 01:36:57.001563] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:58.320 01:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.320 01:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:58.320 01:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.320 01:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.320 01:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.320 01:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.320 01:36:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:58.321 01:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.321 01:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.321 01:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.321 01:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.321 01:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.321 01:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.321 01:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.321 01:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.321 01:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.321 01:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.321 "name": "raid_bdev1", 00:16:58.321 "uuid": "19f059fb-811c-4ab6-bb6b-48f88d4b776b", 00:16:58.321 "strip_size_kb": 0, 00:16:58.321 "state": "online", 00:16:58.321 "raid_level": "raid1", 00:16:58.321 "superblock": true, 00:16:58.321 "num_base_bdevs": 2, 00:16:58.321 "num_base_bdevs_discovered": 1, 00:16:58.321 "num_base_bdevs_operational": 1, 00:16:58.321 "base_bdevs_list": [ 00:16:58.321 { 00:16:58.321 "name": null, 00:16:58.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.321 "is_configured": false, 00:16:58.321 "data_offset": 0, 00:16:58.321 "data_size": 7936 00:16:58.321 }, 00:16:58.321 { 00:16:58.321 "name": "BaseBdev2", 00:16:58.321 
"uuid": "42311485-cbe5-55dc-9cfd-a70b980e7cd5", 00:16:58.321 "is_configured": true, 00:16:58.321 "data_offset": 256, 00:16:58.321 "data_size": 7936 00:16:58.321 } 00:16:58.321 ] 00:16:58.321 }' 00:16:58.321 01:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.321 01:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.581 01:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:58.581 01:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.581 01:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.581 [2024-10-09 01:36:57.345651] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:58.581 [2024-10-09 01:36:57.345890] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:58.581 [2024-10-09 01:36:57.345955] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:58.581 [2024-10-09 01:36:57.346019] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:58.581 [2024-10-09 01:36:57.351115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:58.581 01:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.581 01:36:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:58.581 [2024-10-09 01:36:57.353265] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:59.521 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.521 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.521 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.521 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.521 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.521 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.521 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.521 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.521 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.521 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.521 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:59.521 "name": "raid_bdev1", 00:16:59.521 "uuid": "19f059fb-811c-4ab6-bb6b-48f88d4b776b", 00:16:59.521 "strip_size_kb": 0, 00:16:59.521 "state": "online", 00:16:59.521 "raid_level": "raid1", 00:16:59.521 "superblock": true, 00:16:59.521 "num_base_bdevs": 2, 00:16:59.521 "num_base_bdevs_discovered": 2, 00:16:59.521 "num_base_bdevs_operational": 2, 00:16:59.521 "process": { 00:16:59.521 "type": "rebuild", 00:16:59.521 "target": "spare", 00:16:59.521 "progress": { 00:16:59.521 "blocks": 2560, 00:16:59.521 "percent": 32 00:16:59.521 } 00:16:59.521 }, 00:16:59.521 "base_bdevs_list": [ 00:16:59.521 { 00:16:59.521 "name": "spare", 00:16:59.521 "uuid": "5a1093d9-729e-521c-bb28-da342db4bb1e", 00:16:59.521 "is_configured": true, 00:16:59.521 "data_offset": 256, 00:16:59.521 "data_size": 7936 00:16:59.521 }, 00:16:59.521 { 00:16:59.521 "name": "BaseBdev2", 00:16:59.521 "uuid": "42311485-cbe5-55dc-9cfd-a70b980e7cd5", 00:16:59.521 "is_configured": true, 00:16:59.521 "data_offset": 256, 00:16:59.521 "data_size": 7936 00:16:59.521 } 00:16:59.521 ] 00:16:59.521 }' 00:16:59.521 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.781 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.781 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.781 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.781 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:59.781 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.781 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.781 [2024-10-09 01:36:58.495469] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:59.781 [2024-10-09 01:36:58.563202] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:59.781 [2024-10-09 01:36:58.563312] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.781 [2024-10-09 01:36:58.563346] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:59.781 [2024-10-09 01:36:58.563373] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:59.781 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.781 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:59.781 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.781 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.781 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.781 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.781 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:59.781 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.781 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.781 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.781 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.781 01:36:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.781 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.781 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.781 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.781 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.781 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.781 "name": "raid_bdev1", 00:16:59.781 "uuid": "19f059fb-811c-4ab6-bb6b-48f88d4b776b", 00:16:59.781 "strip_size_kb": 0, 00:16:59.781 "state": "online", 00:16:59.781 "raid_level": "raid1", 00:16:59.781 "superblock": true, 00:16:59.781 "num_base_bdevs": 2, 00:16:59.781 "num_base_bdevs_discovered": 1, 00:16:59.781 "num_base_bdevs_operational": 1, 00:16:59.781 "base_bdevs_list": [ 00:16:59.781 { 00:16:59.781 "name": null, 00:16:59.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.781 "is_configured": false, 00:16:59.781 "data_offset": 0, 00:16:59.781 "data_size": 7936 00:16:59.781 }, 00:16:59.781 { 00:16:59.781 "name": "BaseBdev2", 00:16:59.781 "uuid": "42311485-cbe5-55dc-9cfd-a70b980e7cd5", 00:16:59.781 "is_configured": true, 00:16:59.781 "data_offset": 256, 00:16:59.781 "data_size": 7936 00:16:59.781 } 00:16:59.781 ] 00:16:59.781 }' 00:16:59.781 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.781 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.351 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:00.351 01:36:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.351 01:36:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.351 [2024-10-09 01:36:59.001296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:00.351 [2024-10-09 01:36:59.001399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.351 [2024-10-09 01:36:59.001442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:00.351 [2024-10-09 01:36:59.001474] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.352 [2024-10-09 01:36:59.001714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.352 [2024-10-09 01:36:59.001769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:00.352 [2024-10-09 01:36:59.001850] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:00.352 [2024-10-09 01:36:59.001890] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:00.352 [2024-10-09 01:36:59.001929] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:00.352 [2024-10-09 01:36:59.002009] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:00.352 [2024-10-09 01:36:59.005571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:00.352 spare 00:17:00.352 [2024-10-09 01:36:59.007681] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:00.352 01:36:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.352 01:36:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:01.292 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.292 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.292 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.292 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.292 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.292 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.292 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.292 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.292 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.292 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.292 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:01.292 "name": "raid_bdev1", 00:17:01.292 "uuid": "19f059fb-811c-4ab6-bb6b-48f88d4b776b", 00:17:01.292 "strip_size_kb": 0, 00:17:01.292 "state": "online", 00:17:01.292 "raid_level": "raid1", 00:17:01.292 "superblock": true, 00:17:01.292 "num_base_bdevs": 2, 00:17:01.292 "num_base_bdevs_discovered": 2, 00:17:01.292 "num_base_bdevs_operational": 2, 00:17:01.292 "process": { 00:17:01.292 "type": "rebuild", 00:17:01.292 "target": "spare", 00:17:01.292 "progress": { 00:17:01.292 "blocks": 2560, 00:17:01.292 "percent": 32 00:17:01.292 } 00:17:01.292 }, 00:17:01.292 "base_bdevs_list": [ 00:17:01.292 { 00:17:01.292 "name": "spare", 00:17:01.292 "uuid": "5a1093d9-729e-521c-bb28-da342db4bb1e", 00:17:01.292 "is_configured": true, 00:17:01.292 "data_offset": 256, 00:17:01.292 "data_size": 7936 00:17:01.292 }, 00:17:01.292 { 00:17:01.292 "name": "BaseBdev2", 00:17:01.292 "uuid": "42311485-cbe5-55dc-9cfd-a70b980e7cd5", 00:17:01.292 "is_configured": true, 00:17:01.292 "data_offset": 256, 00:17:01.292 "data_size": 7936 00:17:01.292 } 00:17:01.292 ] 00:17:01.292 }' 00:17:01.292 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.292 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:01.292 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.292 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:01.292 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:01.292 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.292 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.292 [2024-10-09 
01:37:00.145476] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:01.552 [2024-10-09 01:37:00.217427] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:01.552 [2024-10-09 01:37:00.217486] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.552 [2024-10-09 01:37:00.217505] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:01.552 [2024-10-09 01:37:00.217513] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:01.552 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.552 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:01.552 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.552 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.552 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.552 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.552 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:01.552 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.552 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.552 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.552 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.552 01:37:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.552 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.552 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.552 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.552 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.552 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.552 "name": "raid_bdev1", 00:17:01.552 "uuid": "19f059fb-811c-4ab6-bb6b-48f88d4b776b", 00:17:01.552 "strip_size_kb": 0, 00:17:01.552 "state": "online", 00:17:01.552 "raid_level": "raid1", 00:17:01.552 "superblock": true, 00:17:01.552 "num_base_bdevs": 2, 00:17:01.552 "num_base_bdevs_discovered": 1, 00:17:01.552 "num_base_bdevs_operational": 1, 00:17:01.552 "base_bdevs_list": [ 00:17:01.552 { 00:17:01.552 "name": null, 00:17:01.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.552 "is_configured": false, 00:17:01.552 "data_offset": 0, 00:17:01.552 "data_size": 7936 00:17:01.552 }, 00:17:01.552 { 00:17:01.552 "name": "BaseBdev2", 00:17:01.552 "uuid": "42311485-cbe5-55dc-9cfd-a70b980e7cd5", 00:17:01.552 "is_configured": true, 00:17:01.552 "data_offset": 256, 00:17:01.553 "data_size": 7936 00:17:01.553 } 00:17:01.553 ] 00:17:01.553 }' 00:17:01.553 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.553 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.813 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:01.813 01:37:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.813 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:01.813 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:01.813 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.813 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.813 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.813 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.813 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.813 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.813 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.813 "name": "raid_bdev1", 00:17:01.813 "uuid": "19f059fb-811c-4ab6-bb6b-48f88d4b776b", 00:17:01.813 "strip_size_kb": 0, 00:17:01.813 "state": "online", 00:17:01.813 "raid_level": "raid1", 00:17:01.813 "superblock": true, 00:17:01.813 "num_base_bdevs": 2, 00:17:01.813 "num_base_bdevs_discovered": 1, 00:17:01.813 "num_base_bdevs_operational": 1, 00:17:01.813 "base_bdevs_list": [ 00:17:01.813 { 00:17:01.813 "name": null, 00:17:01.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.813 "is_configured": false, 00:17:01.813 "data_offset": 0, 00:17:01.813 "data_size": 7936 00:17:01.813 }, 00:17:01.813 { 00:17:01.813 "name": "BaseBdev2", 00:17:01.813 "uuid": "42311485-cbe5-55dc-9cfd-a70b980e7cd5", 00:17:01.813 "is_configured": true, 00:17:01.813 "data_offset": 256, 
00:17:01.813 "data_size": 7936 00:17:01.813 } 00:17:01.813 ] 00:17:01.813 }' 00:17:01.813 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.074 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:02.074 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.074 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:02.074 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:02.074 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.074 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.074 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.074 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:02.074 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.074 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.074 [2024-10-09 01:37:00.795270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:02.074 [2024-10-09 01:37:00.795328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.074 [2024-10-09 01:37:00.795352] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:02.074 [2024-10-09 01:37:00.795361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.074 [2024-10-09 01:37:00.795551] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.074 [2024-10-09 01:37:00.795563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:02.074 [2024-10-09 01:37:00.795629] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:02.074 [2024-10-09 01:37:00.795642] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:02.074 [2024-10-09 01:37:00.795657] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:02.074 [2024-10-09 01:37:00.795667] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:02.074 BaseBdev1 00:17:02.074 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.074 01:37:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:03.015 01:37:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:03.015 01:37:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.015 01:37:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.015 01:37:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.015 01:37:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.015 01:37:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:03.015 01:37:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.015 01:37:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.015 01:37:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.015 01:37:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.015 01:37:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.015 01:37:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.015 01:37:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.015 01:37:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.015 01:37:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.015 01:37:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.015 "name": "raid_bdev1", 00:17:03.015 "uuid": "19f059fb-811c-4ab6-bb6b-48f88d4b776b", 00:17:03.015 "strip_size_kb": 0, 00:17:03.015 "state": "online", 00:17:03.015 "raid_level": "raid1", 00:17:03.015 "superblock": true, 00:17:03.015 "num_base_bdevs": 2, 00:17:03.015 "num_base_bdevs_discovered": 1, 00:17:03.015 "num_base_bdevs_operational": 1, 00:17:03.015 "base_bdevs_list": [ 00:17:03.015 { 00:17:03.015 "name": null, 00:17:03.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.015 "is_configured": false, 00:17:03.015 "data_offset": 0, 00:17:03.015 "data_size": 7936 00:17:03.015 }, 00:17:03.015 { 00:17:03.015 "name": "BaseBdev2", 00:17:03.015 "uuid": "42311485-cbe5-55dc-9cfd-a70b980e7cd5", 00:17:03.015 "is_configured": true, 00:17:03.015 "data_offset": 256, 00:17:03.015 "data_size": 7936 00:17:03.015 } 00:17:03.015 ] 00:17:03.015 }' 00:17:03.015 01:37:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.015 01:37:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.585 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.586 "name": "raid_bdev1", 00:17:03.586 "uuid": "19f059fb-811c-4ab6-bb6b-48f88d4b776b", 00:17:03.586 "strip_size_kb": 0, 00:17:03.586 "state": "online", 00:17:03.586 "raid_level": "raid1", 00:17:03.586 "superblock": true, 00:17:03.586 "num_base_bdevs": 2, 00:17:03.586 "num_base_bdevs_discovered": 1, 00:17:03.586 "num_base_bdevs_operational": 1, 00:17:03.586 "base_bdevs_list": [ 00:17:03.586 { 00:17:03.586 "name": 
null, 00:17:03.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.586 "is_configured": false, 00:17:03.586 "data_offset": 0, 00:17:03.586 "data_size": 7936 00:17:03.586 }, 00:17:03.586 { 00:17:03.586 "name": "BaseBdev2", 00:17:03.586 "uuid": "42311485-cbe5-55dc-9cfd-a70b980e7cd5", 00:17:03.586 "is_configured": true, 00:17:03.586 "data_offset": 256, 00:17:03.586 "data_size": 7936 00:17:03.586 } 00:17:03.586 ] 00:17:03.586 }' 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.586 [2024-10-09 01:37:02.359730] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:03.586 [2024-10-09 01:37:02.359881] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:03.586 [2024-10-09 01:37:02.359895] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:03.586 request: 00:17:03.586 { 00:17:03.586 "base_bdev": "BaseBdev1", 00:17:03.586 "raid_bdev": "raid_bdev1", 00:17:03.586 "method": "bdev_raid_add_base_bdev", 00:17:03.586 "req_id": 1 00:17:03.586 } 00:17:03.586 Got JSON-RPC error response 00:17:03.586 response: 00:17:03.586 { 00:17:03.586 "code": -22, 00:17:03.586 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:03.586 } 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:03.586 01:37:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:04.527 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:17:04.527 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.527 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.527 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.527 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.527 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:04.527 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.527 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.527 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.527 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.527 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.527 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.527 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.527 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:04.527 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.787 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.787 "name": "raid_bdev1", 00:17:04.787 "uuid": "19f059fb-811c-4ab6-bb6b-48f88d4b776b", 00:17:04.787 "strip_size_kb": 0, 
00:17:04.787 "state": "online", 00:17:04.787 "raid_level": "raid1", 00:17:04.787 "superblock": true, 00:17:04.787 "num_base_bdevs": 2, 00:17:04.787 "num_base_bdevs_discovered": 1, 00:17:04.787 "num_base_bdevs_operational": 1, 00:17:04.787 "base_bdevs_list": [ 00:17:04.787 { 00:17:04.787 "name": null, 00:17:04.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.787 "is_configured": false, 00:17:04.787 "data_offset": 0, 00:17:04.787 "data_size": 7936 00:17:04.787 }, 00:17:04.787 { 00:17:04.787 "name": "BaseBdev2", 00:17:04.787 "uuid": "42311485-cbe5-55dc-9cfd-a70b980e7cd5", 00:17:04.787 "is_configured": true, 00:17:04.787 "data_offset": 256, 00:17:04.787 "data_size": 7936 00:17:04.787 } 00:17:04.787 ] 00:17:04.787 }' 00:17:04.787 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.787 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.047 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:05.047 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.047 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:05.047 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:05.047 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.047 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.047 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.047 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.047 01:37:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.047 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.047 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.047 "name": "raid_bdev1", 00:17:05.047 "uuid": "19f059fb-811c-4ab6-bb6b-48f88d4b776b", 00:17:05.047 "strip_size_kb": 0, 00:17:05.047 "state": "online", 00:17:05.047 "raid_level": "raid1", 00:17:05.047 "superblock": true, 00:17:05.047 "num_base_bdevs": 2, 00:17:05.047 "num_base_bdevs_discovered": 1, 00:17:05.047 "num_base_bdevs_operational": 1, 00:17:05.047 "base_bdevs_list": [ 00:17:05.047 { 00:17:05.047 "name": null, 00:17:05.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.047 "is_configured": false, 00:17:05.047 "data_offset": 0, 00:17:05.047 "data_size": 7936 00:17:05.047 }, 00:17:05.048 { 00:17:05.048 "name": "BaseBdev2", 00:17:05.048 "uuid": "42311485-cbe5-55dc-9cfd-a70b980e7cd5", 00:17:05.048 "is_configured": true, 00:17:05.048 "data_offset": 256, 00:17:05.048 "data_size": 7936 00:17:05.048 } 00:17:05.048 ] 00:17:05.048 }' 00:17:05.048 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.048 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:05.048 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.048 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:05.048 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 100498 00:17:05.048 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 100498 ']' 00:17:05.048 01:37:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 100498 00:17:05.048 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:17:05.048 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:05.048 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100498 00:17:05.048 killing process with pid 100498 00:17:05.048 Received shutdown signal, test time was about 60.000000 seconds 00:17:05.048 00:17:05.048 Latency(us) 00:17:05.048 [2024-10-09T01:37:03.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.048 [2024-10-09T01:37:03.941Z] =================================================================================================================== 00:17:05.048 [2024-10-09T01:37:03.941Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:05.048 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:05.048 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:05.048 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100498' 00:17:05.048 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 100498 00:17:05.048 [2024-10-09 01:37:03.925819] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:05.048 [2024-10-09 01:37:03.925953] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:05.048 01:37:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 100498 00:17:05.048 [2024-10-09 01:37:03.926005] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:17:05.048 [2024-10-09 01:37:03.926018] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:05.308 [2024-10-09 01:37:03.987203] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:05.569 01:37:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:17:05.569 ************************************ 00:17:05.569 END TEST raid_rebuild_test_sb_md_interleaved 00:17:05.569 ************************************ 00:17:05.569 00:17:05.569 real 0m16.078s 00:17:05.569 user 0m21.243s 00:17:05.569 sys 0m1.597s 00:17:05.569 01:37:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:05.569 01:37:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.569 01:37:04 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:17:05.569 01:37:04 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:17:05.569 01:37:04 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 100498 ']' 00:17:05.569 01:37:04 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 100498 00:17:05.569 01:37:04 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:17:05.569 00:17:05.569 real 10m10.080s 00:17:05.569 user 14m10.386s 00:17:05.569 sys 1m57.557s 00:17:05.569 ************************************ 00:17:05.569 END TEST bdev_raid 00:17:05.569 ************************************ 00:17:05.569 01:37:04 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:05.569 01:37:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:05.829 01:37:04 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:05.829 01:37:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:05.829 01:37:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:05.829 01:37:04 -- common/autotest_common.sh@10 -- # set +x 00:17:05.829 
************************************ 00:17:05.829 START TEST spdkcli_raid 00:17:05.829 ************************************ 00:17:05.829 01:37:04 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:05.829 * Looking for test storage... 00:17:05.829 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:05.829 01:37:04 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:05.829 01:37:04 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:17:05.829 01:37:04 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:06.090 01:37:04 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:06.090 01:37:04 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:06.090 01:37:04 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:06.090 01:37:04 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:06.090 01:37:04 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:17:06.090 01:37:04 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:17:06.090 01:37:04 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:17:06.090 01:37:04 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:17:06.090 01:37:04 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:17:06.090 01:37:04 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:17:06.090 01:37:04 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:17:06.090 01:37:04 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:06.090 01:37:04 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:17:06.090 01:37:04 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:17:06.090 01:37:04 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:06.090 01:37:04 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:06.090 01:37:04 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:17:06.090 01:37:04 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:17:06.090 01:37:04 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:06.090 01:37:04 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:17:06.090 01:37:04 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:06.090 01:37:04 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:17:06.090 01:37:04 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:17:06.090 01:37:04 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:06.090 01:37:04 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:17:06.090 01:37:04 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:06.091 01:37:04 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:06.091 01:37:04 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:06.091 01:37:04 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:17:06.091 01:37:04 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:06.091 01:37:04 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:06.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.091 --rc genhtml_branch_coverage=1 00:17:06.091 --rc genhtml_function_coverage=1 00:17:06.091 --rc genhtml_legend=1 00:17:06.091 --rc geninfo_all_blocks=1 00:17:06.091 --rc geninfo_unexecuted_blocks=1 00:17:06.091 00:17:06.091 ' 00:17:06.091 01:37:04 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:06.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.091 --rc genhtml_branch_coverage=1 00:17:06.091 --rc genhtml_function_coverage=1 00:17:06.091 --rc genhtml_legend=1 00:17:06.091 --rc geninfo_all_blocks=1 00:17:06.091 --rc geninfo_unexecuted_blocks=1 00:17:06.091 00:17:06.091 ' 00:17:06.091 
01:37:04 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:06.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.091 --rc genhtml_branch_coverage=1 00:17:06.091 --rc genhtml_function_coverage=1 00:17:06.091 --rc genhtml_legend=1 00:17:06.091 --rc geninfo_all_blocks=1 00:17:06.091 --rc geninfo_unexecuted_blocks=1 00:17:06.091 00:17:06.091 ' 00:17:06.091 01:37:04 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:06.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.091 --rc genhtml_branch_coverage=1 00:17:06.091 --rc genhtml_function_coverage=1 00:17:06.091 --rc genhtml_legend=1 00:17:06.091 --rc geninfo_all_blocks=1 00:17:06.091 --rc geninfo_unexecuted_blocks=1 00:17:06.091 00:17:06.091 ' 00:17:06.091 01:37:04 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:06.091 01:37:04 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:06.091 01:37:04 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:06.091 01:37:04 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:17:06.091 01:37:04 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:17:06.091 01:37:04 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:17:06.091 01:37:04 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:17:06.091 01:37:04 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:17:06.091 01:37:04 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:17:06.091 01:37:04 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:17:06.091 01:37:04 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:17:06.091 01:37:04 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:17:06.091 01:37:04 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:17:06.091 01:37:04 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:17:06.091 01:37:04 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:17:06.091 01:37:04 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:17:06.091 01:37:04 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:17:06.091 01:37:04 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:17:06.091 01:37:04 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:17:06.091 01:37:04 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:17:06.091 01:37:04 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:17:06.091 01:37:04 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:17:06.091 01:37:04 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:17:06.091 01:37:04 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:17:06.091 01:37:04 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:17:06.091 01:37:04 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:06.091 01:37:04 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:06.091 01:37:04 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:06.091 01:37:04 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:06.091 01:37:04 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:06.091 01:37:04 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:06.091 01:37:04 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:17:06.091 01:37:04 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:17:06.091 01:37:04 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:06.091 01:37:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:06.091 01:37:04 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:17:06.091 01:37:04 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=101165 00:17:06.091 01:37:04 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:17:06.091 01:37:04 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 101165 00:17:06.091 01:37:04 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 101165 ']' 00:17:06.091 01:37:04 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.091 01:37:04 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:06.091 01:37:04 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.091 01:37:04 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:06.091 01:37:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:06.091 [2024-10-09 01:37:04.887099] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:17:06.091 [2024-10-09 01:37:04.887760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101165 ] 00:17:06.351 [2024-10-09 01:37:05.019986] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:06.351 [2024-10-09 01:37:05.049643] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:06.351 [2024-10-09 01:37:05.124361] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.351 [2024-10-09 01:37:05.124443] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.922 01:37:05 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:06.922 01:37:05 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:17:06.922 01:37:05 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:17:06.922 01:37:05 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:06.922 01:37:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:06.922 01:37:05 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:17:06.922 01:37:05 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:06.922 01:37:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:06.922 01:37:05 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:17:06.922 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:17:06.922 ' 00:17:08.840 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:17:08.840 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:17:08.840 01:37:07 spdkcli_raid -- 
spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:17:08.840 01:37:07 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:08.840 01:37:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:08.840 01:37:07 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:17:08.840 01:37:07 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:08.840 01:37:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:08.840 01:37:07 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:17:08.840 ' 00:17:09.781 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:17:09.781 01:37:08 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:17:09.781 01:37:08 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:09.781 01:37:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:09.781 01:37:08 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:17:09.781 01:37:08 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:09.781 01:37:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:09.781 01:37:08 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:17:09.781 01:37:08 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:17:10.350 01:37:09 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:17:10.350 01:37:09 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:17:10.350 01:37:09 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:17:10.350 01:37:09 
spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:10.350 01:37:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:10.350 01:37:09 spdkcli_raid -- spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:17:10.350 01:37:09 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:10.350 01:37:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:10.350 01:37:09 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:17:10.350 ' 00:17:11.290 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:17:11.550 01:37:10 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:17:11.550 01:37:10 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:11.550 01:37:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:11.550 01:37:10 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:17:11.550 01:37:10 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:11.550 01:37:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:11.550 01:37:10 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:17:11.550 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:17:11.550 ' 00:17:12.935 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:17:12.935 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:17:12.935 01:37:11 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:17:12.935 01:37:11 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:12.935 01:37:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:12.935 01:37:11 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 101165 00:17:12.935 01:37:11 spdkcli_raid -- 
common/autotest_common.sh@950 -- # '[' -z 101165 ']' 00:17:12.935 01:37:11 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 101165 00:17:12.935 01:37:11 spdkcli_raid -- common/autotest_common.sh@955 -- # uname 00:17:12.935 01:37:11 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:12.935 01:37:11 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101165 00:17:13.194 01:37:11 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:13.194 01:37:11 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:13.194 01:37:11 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101165' 00:17:13.194 killing process with pid 101165 00:17:13.194 01:37:11 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 101165 00:17:13.194 01:37:11 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 101165 00:17:13.764 01:37:12 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:17:13.764 01:37:12 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 101165 ']' 00:17:13.764 01:37:12 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 101165 00:17:13.764 Process with pid 101165 is not found 00:17:13.764 01:37:12 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 101165 ']' 00:17:13.764 01:37:12 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 101165 00:17:13.764 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (101165) - No such process 00:17:13.764 01:37:12 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 101165 is not found' 00:17:13.764 01:37:12 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:17:13.764 01:37:12 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:17:13.764 01:37:12 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:17:13.764 01:37:12 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:17:13.764 00:17:13.764 real 0m7.974s 00:17:13.764 user 0m16.481s 00:17:13.764 sys 0m1.250s 00:17:13.764 01:37:12 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:13.764 01:37:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:13.764 ************************************ 00:17:13.764 END TEST spdkcli_raid 00:17:13.764 ************************************ 00:17:13.764 01:37:12 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:17:13.764 01:37:12 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:13.764 01:37:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:13.764 01:37:12 -- common/autotest_common.sh@10 -- # set +x 00:17:13.764 ************************************ 00:17:13.764 START TEST blockdev_raid5f 00:17:13.764 ************************************ 00:17:13.764 01:37:12 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:17:14.025 * Looking for test storage... 
00:17:14.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:14.025 01:37:12 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:14.025 01:37:12 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version 00:17:14.025 01:37:12 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:14.025 01:37:12 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra ver2 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:14.025 01:37:12 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:17:14.025 01:37:12 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:14.025 01:37:12 blockdev_raid5f -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:14.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.025 --rc genhtml_branch_coverage=1 00:17:14.025 --rc genhtml_function_coverage=1 00:17:14.025 --rc genhtml_legend=1 00:17:14.025 --rc geninfo_all_blocks=1 00:17:14.025 --rc geninfo_unexecuted_blocks=1 00:17:14.025 00:17:14.025 ' 00:17:14.025 01:37:12 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:14.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.025 --rc genhtml_branch_coverage=1 00:17:14.025 --rc genhtml_function_coverage=1 00:17:14.025 --rc genhtml_legend=1 00:17:14.025 --rc geninfo_all_blocks=1 00:17:14.025 --rc geninfo_unexecuted_blocks=1 
00:17:14.025 00:17:14.025 ' 00:17:14.025 01:37:12 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:14.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.025 --rc genhtml_branch_coverage=1 00:17:14.025 --rc genhtml_function_coverage=1 00:17:14.025 --rc genhtml_legend=1 00:17:14.025 --rc geninfo_all_blocks=1 00:17:14.025 --rc geninfo_unexecuted_blocks=1 00:17:14.025 00:17:14.025 ' 00:17:14.025 01:37:12 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:14.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.025 --rc genhtml_branch_coverage=1 00:17:14.025 --rc genhtml_function_coverage=1 00:17:14.025 --rc genhtml_legend=1 00:17:14.025 --rc geninfo_all_blocks=1 00:17:14.025 --rc geninfo_unexecuted_blocks=1 00:17:14.025 00:17:14.025 ' 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/blockdev.sh@671 -- 
# QOS_RUN_TIME=5 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=101424 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:14.025 01:37:12 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 101424 00:17:14.025 01:37:12 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 101424 ']' 00:17:14.025 01:37:12 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.025 01:37:12 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:14.025 01:37:12 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:14.025 01:37:12 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:14.025 01:37:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:14.025 [2024-10-09 01:37:12.915784] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:17:14.025 [2024-10-09 01:37:12.916516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101424 ] 00:17:14.285 [2024-10-09 01:37:13.052257] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:14.285 [2024-10-09 01:37:13.080656] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.285 [2024-10-09 01:37:13.154411] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.855 01:37:13 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:14.855 01:37:13 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:17:14.855 01:37:13 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:17:14.855 01:37:13 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:17:14.855 01:37:13 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:17:14.855 01:37:13 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.855 01:37:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:14.855 Malloc0 00:17:15.115 Malloc1 00:17:15.115 Malloc2 00:17:15.115 01:37:13 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.115 01:37:13 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:17:15.115 01:37:13 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.115 01:37:13 blockdev_raid5f -- 
common/autotest_common.sh@10 -- # set +x 00:17:15.115 01:37:13 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.115 01:37:13 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:17:15.115 01:37:13 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:17:15.115 01:37:13 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.115 01:37:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:15.115 01:37:13 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.115 01:37:13 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:17:15.115 01:37:13 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.115 01:37:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:15.115 01:37:13 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.115 01:37:13 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:15.115 01:37:13 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.115 01:37:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:15.115 01:37:13 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.115 01:37:13 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:17:15.115 01:37:13 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:17:15.115 01:37:13 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:17:15.115 01:37:13 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.115 01:37:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:15.115 01:37:13 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.115 01:37:13 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:17:15.115 01:37:13 blockdev_raid5f -- bdev/blockdev.sh@748 
-- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "32ec6b9f-950b-4923-a13e-01181451b5ba"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "32ec6b9f-950b-4923-a13e-01181451b5ba",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "32ec6b9f-950b-4923-a13e-01181451b5ba",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "58cb0e91-fde4-429d-86db-4ce181f568a9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "05c1da10-72c6-4c03-bac3-5bf30583680f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "fbccf19f-5e18-407c-bad9-d856a3a7e2bb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:15.115 01:37:13 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:17:15.115 01:37:13 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:17:15.115 01:37:13 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:17:15.115 01:37:13 blockdev_raid5f -- 
bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:17:15.115 01:37:13 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 101424 00:17:15.115 01:37:13 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 101424 ']' 00:17:15.115 01:37:13 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 101424 00:17:15.115 01:37:13 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:17:15.115 01:37:13 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:15.115 01:37:13 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101424 00:17:15.115 killing process with pid 101424 00:17:15.115 01:37:13 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:15.115 01:37:13 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:15.115 01:37:13 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101424' 00:17:15.115 01:37:13 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 101424 00:17:15.115 01:37:13 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 101424 00:17:16.055 01:37:14 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:16.055 01:37:14 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:17:16.055 01:37:14 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:16.055 01:37:14 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:16.055 01:37:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:16.055 ************************************ 00:17:16.055 START TEST bdev_hello_world 00:17:16.055 ************************************ 00:17:16.055 01:37:14 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:17:16.055 [2024-10-09 01:37:14.808819] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:17:16.055 [2024-10-09 01:37:14.808937] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101464 ] 00:17:16.055 [2024-10-09 01:37:14.939619] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:16.315 [2024-10-09 01:37:14.968487] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.315 [2024-10-09 01:37:15.038173] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.574 [2024-10-09 01:37:15.308707] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:16.574 [2024-10-09 01:37:15.308761] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:17:16.574 [2024-10-09 01:37:15.308785] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:16.574 [2024-10-09 01:37:15.309142] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:16.574 [2024-10-09 01:37:15.309284] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:16.575 [2024-10-09 01:37:15.309300] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:16.575 [2024-10-09 01:37:15.309349] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:17:16.575 00:17:16.575 [2024-10-09 01:37:15.309379] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:16.834 00:17:16.834 real 0m0.992s 00:17:16.834 user 0m0.556s 00:17:16.834 sys 0m0.318s 00:17:16.834 ************************************ 00:17:16.834 END TEST bdev_hello_world 00:17:16.834 ************************************ 00:17:16.834 01:37:15 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:16.834 01:37:15 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:17.094 01:37:15 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:17:17.094 01:37:15 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:17.094 01:37:15 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:17.094 01:37:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:17.094 ************************************ 00:17:17.094 START TEST bdev_bounds 00:17:17.094 ************************************ 00:17:17.094 Process bdevio pid: 101495 00:17:17.094 01:37:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:17:17.094 01:37:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=101495 00:17:17.094 01:37:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:17.094 01:37:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:17.094 01:37:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 101495' 00:17:17.094 01:37:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 101495 00:17:17.094 01:37:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 101495 ']' 00:17:17.094 01:37:15 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.094 01:37:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:17.094 01:37:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.094 01:37:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:17.094 01:37:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:17.094 [2024-10-09 01:37:15.880058] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:17:17.094 [2024-10-09 01:37:15.880309] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101495 ] 00:17:17.354 [2024-10-09 01:37:16.013660] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:17.354 [2024-10-09 01:37:16.040460] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:17.354 [2024-10-09 01:37:16.110240] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.354 [2024-10-09 01:37:16.110445] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.354 [2024-10-09 01:37:16.110493] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.923 01:37:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:17.923 01:37:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:17:17.923 01:37:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:17.923 I/O targets: 00:17:17.923 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:17:17.923 00:17:17.923 00:17:17.923 CUnit - A unit testing framework for C - Version 2.1-3 00:17:17.923 http://cunit.sourceforge.net/ 00:17:17.923 00:17:17.923 00:17:17.923 Suite: bdevio tests on: raid5f 00:17:17.923 Test: blockdev write read block ...passed 00:17:17.923 Test: blockdev write zeroes read block ...passed 00:17:17.923 Test: blockdev write zeroes read no split ...passed 00:17:18.183 Test: blockdev write zeroes read split ...passed 00:17:18.183 Test: blockdev write zeroes read split partial ...passed 00:17:18.183 Test: blockdev reset ...passed 00:17:18.183 Test: blockdev write read 8 blocks ...passed 00:17:18.183 Test: blockdev write read size > 128k ...passed 00:17:18.183 Test: blockdev write read invalid size ...passed 00:17:18.183 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:18.183 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:18.183 Test: blockdev write read max offset ...passed 00:17:18.183 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:18.183 Test: blockdev writev readv 8 blocks ...passed 00:17:18.183 Test: 
blockdev writev readv 30 x 1block ...passed 00:17:18.183 Test: blockdev writev readv block ...passed 00:17:18.183 Test: blockdev writev readv size > 128k ...passed 00:17:18.183 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:18.183 Test: blockdev comparev and writev ...passed 00:17:18.183 Test: blockdev nvme passthru rw ...passed 00:17:18.183 Test: blockdev nvme passthru vendor specific ...passed 00:17:18.183 Test: blockdev nvme admin passthru ...passed 00:17:18.183 Test: blockdev copy ...passed 00:17:18.183 00:17:18.183 Run Summary: Type Total Ran Passed Failed Inactive 00:17:18.183 suites 1 1 n/a 0 0 00:17:18.183 tests 23 23 23 0 0 00:17:18.183 asserts 130 130 130 0 n/a 00:17:18.183 00:17:18.183 Elapsed time = 0.351 seconds 00:17:18.183 0 00:17:18.183 01:37:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 101495 00:17:18.183 01:37:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 101495 ']' 00:17:18.183 01:37:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 101495 00:17:18.183 01:37:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:17:18.183 01:37:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:18.183 01:37:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 101495 00:17:18.183 01:37:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:18.183 01:37:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:18.183 01:37:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101495' 00:17:18.183 killing process with pid 101495 00:17:18.183 01:37:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 101495 00:17:18.183 01:37:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 101495 
00:17:18.754 01:37:17 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:17:18.754 00:17:18.754 real 0m1.631s 00:17:18.754 user 0m3.699s 00:17:18.754 sys 0m0.462s 00:17:18.754 01:37:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:18.754 01:37:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:18.754 ************************************ 00:17:18.754 END TEST bdev_bounds 00:17:18.754 ************************************ 00:17:18.754 01:37:17 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:17:18.754 01:37:17 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:18.754 01:37:17 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:18.754 01:37:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:18.754 ************************************ 00:17:18.754 START TEST bdev_nbd 00:17:18.754 ************************************ 00:17:18.754 01:37:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:17:18.754 01:37:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:17:18.754 01:37:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:17:18.754 01:37:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:18.754 01:37:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:18.754 01:37:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:17:18.754 01:37:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:17:18.754 01:37:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:17:18.754 01:37:17 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:17:18.754 01:37:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:17:18.754 01:37:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:17:18.754 01:37:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:17:18.754 01:37:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:17:18.754 01:37:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:17:18.754 01:37:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:17:18.754 01:37:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:17:18.754 01:37:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=101544 00:17:18.754 01:37:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:18.754 01:37:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:17:18.754 01:37:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 101544 /var/tmp/spdk-nbd.sock 00:17:18.754 01:37:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 101544 ']' 00:17:18.754 01:37:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:18.754 01:37:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:18.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:17:18.754 01:37:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:18.754 01:37:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:18.754 01:37:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:18.754 [2024-10-09 01:37:17.602447] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:17:18.754 [2024-10-09 01:37:17.602594] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.014 [2024-10-09 01:37:17.736454] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:19.014 [2024-10-09 01:37:17.765468] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.014 [2024-10-09 01:37:17.836841] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.583 01:37:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:19.583 01:37:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:17:19.583 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:17:19.583 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:19.583 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:17:19.583 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:19.583 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:17:19.583 01:37:18 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:19.583 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:17:19.583 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:19.583 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:19.583 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:19.583 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:19.583 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:17:19.583 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:17:19.842 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:19.842 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:17:19.842 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:19.842 01:37:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:19.842 01:37:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:19.842 01:37:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:19.842 01:37:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:19.842 01:37:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:19.842 01:37:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:19.842 01:37:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:19.842 01:37:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:19.842 01:37:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:19.842 1+0 records in 00:17:19.843 1+0 records out 00:17:19.843 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418324 s, 9.8 MB/s 00:17:19.843 01:37:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.843 01:37:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:19.843 01:37:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.843 01:37:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:19.843 01:37:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:19.843 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:19.843 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:17:19.843 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:20.102 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:20.102 { 00:17:20.102 "nbd_device": "/dev/nbd0", 00:17:20.102 "bdev_name": "raid5f" 00:17:20.102 } 00:17:20.102 ]' 00:17:20.102 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:20.102 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:20.102 { 00:17:20.102 "nbd_device": "/dev/nbd0", 00:17:20.102 "bdev_name": "raid5f" 00:17:20.102 } 00:17:20.102 ]' 00:17:20.102 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:20.102 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:20.102 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 
-- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:20.102 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:20.102 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:20.102 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:20.102 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:20.102 01:37:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:20.362 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:20.362 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:20.362 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:20.362 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:20.362 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:20.362 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:20.362 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:20.362 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:20.362 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:20.362 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:20.362 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
jq -r '.[] | .nbd_device' 00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 
00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:20.622 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:17:20.882 /dev/nbd0 00:17:20.882 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:20.882 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:20.882 01:37:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:20.882 01:37:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:20.882 01:37:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:20.882 01:37:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:20.882 01:37:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:20.882 01:37:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:20.882 01:37:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:20.882 01:37:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:20.882 01:37:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:20.882 1+0 records in 00:17:20.882 1+0 records out 00:17:20.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370984 s, 11.0 MB/s 00:17:20.882 01:37:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:17:20.882 01:37:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:20.882 01:37:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:20.882 01:37:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:20.882 01:37:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:20.882 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:20.882 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:20.882 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:20.882 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:20.882 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:21.142 { 00:17:21.142 "nbd_device": "/dev/nbd0", 00:17:21.142 "bdev_name": "raid5f" 00:17:21.142 } 00:17:21.142 ]' 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:21.142 { 00:17:21.142 "nbd_device": "/dev/nbd0", 00:17:21.142 "bdev_name": "raid5f" 00:17:21.142 } 00:17:21.142 ]' 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:17:21.142 
01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:17:21.142 256+0 records in 00:17:21.142 256+0 records out 00:17:21.142 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131947 s, 79.5 MB/s 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:21.142 256+0 records in 00:17:21.142 256+0 records out 00:17:21.142 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293175 s, 35.8 MB/s 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:21.142 01:37:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:21.402 01:37:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:21.402 01:37:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:21.402 01:37:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:21.402 01:37:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:21.402 01:37:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:21.402 01:37:20 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:21.402 01:37:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:21.402 01:37:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:21.402 01:37:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:21.402 01:37:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:21.402 01:37:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:21.662 01:37:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:21.662 01:37:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:21.662 01:37:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:21.662 01:37:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:21.662 01:37:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:21.662 01:37:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:21.662 01:37:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:21.662 01:37:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:21.662 01:37:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:21.662 01:37:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:21.662 01:37:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:21.662 01:37:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:21.662 01:37:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:21.662 01:37:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:21.662 01:37:20 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:17:21.662 01:37:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:21.921 malloc_lvol_verify 00:17:21.922 01:37:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:22.181 dc8b6d45-c5cc-4d3d-8c30-006a2b43127a 00:17:22.181 01:37:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:22.181 62dd0bd3-7dc0-4f1a-9fff-8e6daadb9465 00:17:22.181 01:37:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:22.444 /dev/nbd0 00:17:22.444 01:37:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:17:22.444 01:37:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:17:22.444 01:37:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:17:22.444 01:37:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:17:22.444 01:37:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:17:22.444 mke2fs 1.47.0 (5-Feb-2023) 00:17:22.444 Discarding device blocks: 0/4096 done 00:17:22.444 Creating filesystem with 4096 1k blocks and 1024 inodes 00:17:22.444 00:17:22.444 Allocating group tables: 0/1 done 00:17:22.444 Writing inode tables: 0/1 done 00:17:22.444 Creating journal (1024 blocks): done 00:17:22.444 Writing superblocks and filesystem accounting information: 0/1 done 00:17:22.444 00:17:22.444 01:37:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
00:17:22.444 01:37:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:22.444 01:37:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:22.444 01:37:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:22.444 01:37:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:22.444 01:37:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:22.444 01:37:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:22.715 01:37:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:22.715 01:37:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:22.715 01:37:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:22.715 01:37:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:22.715 01:37:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:22.715 01:37:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:22.715 01:37:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:22.715 01:37:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:22.715 01:37:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 101544 00:17:22.715 01:37:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 101544 ']' 00:17:22.715 01:37:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 101544 00:17:22.715 01:37:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:17:22.715 01:37:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:22.715 01:37:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # 
ps --no-headers -o comm= 101544 00:17:22.715 killing process with pid 101544 00:17:22.715 01:37:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:22.715 01:37:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:22.715 01:37:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 101544' 00:17:22.715 01:37:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 101544 00:17:22.715 01:37:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@974 -- # wait 101544 00:17:23.318 01:37:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:17:23.318 00:17:23.318 real 0m4.478s 00:17:23.318 user 0m6.311s 00:17:23.318 sys 0m1.333s 00:17:23.318 01:37:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:23.318 ************************************ 00:17:23.318 END TEST bdev_nbd 00:17:23.318 ************************************ 00:17:23.318 01:37:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:23.318 01:37:22 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:17:23.318 01:37:22 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:17:23.318 01:37:22 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:17:23.318 01:37:22 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:17:23.318 01:37:22 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:23.318 01:37:22 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:23.318 01:37:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:23.318 ************************************ 00:17:23.318 START TEST bdev_fio 00:17:23.318 ************************************ 00:17:23.318 01:37:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 
00:17:23.318 01:37:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:17:23.318 01:37:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:23.318 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:23.318 01:37:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:23.318 01:37:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:17:23.318 01:37:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:17:23.318 01:37:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:17:23.318 01:37:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:23.318 01:37:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:23.318 01:37:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:17:23.318 01:37:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:17:23.318 01:37:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:23.318 01:37:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:23.318 01:37:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:23.318 01:37:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:17:23.318 01:37:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:23.318 01:37:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:23.318 01:37:22 blockdev_raid5f.bdev_fio -- 
common/autotest_common.sh@1301 -- # cat 00:17:23.318 01:37:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:17:23.318 01:37:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:17:23.318 01:37:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:17:23.318 01:37:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:17:23.578 01:37:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:17:23.578 01:37:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:17:23.578 01:37:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:23.578 01:37:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:17:23.578 01:37:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:17:23.578 01:37:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:23.578 01:37:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:23.578 01:37:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:17:23.578 01:37:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:23.578 01:37:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:23.578 ************************************ 00:17:23.578 START 
TEST bdev_fio_rw_verify 00:17:23.578 ************************************ 00:17:23.578 01:37:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:23.578 01:37:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:23.578 01:37:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:23.578 01:37:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:23.578 01:37:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:23.578 01:37:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:23.578 01:37:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:17:23.578 01:37:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:23.578 01:37:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:23.578 01:37:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:23.578 01:37:22 
blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:17:23.578 01:37:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:23.578 01:37:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:23.578 01:37:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:23.578 01:37:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:17:23.579 01:37:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:23.579 01:37:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:23.579 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:23.579 fio-3.35 00:17:23.579 Starting 1 thread 00:17:35.799 00:17:35.799 job_raid5f: (groupid=0, jobs=1): err= 0: pid=101739: Wed Oct 9 01:37:33 2024 00:17:35.799 read: IOPS=12.4k, BW=48.6MiB/s (51.0MB/s)(486MiB/10001msec) 00:17:35.799 slat (nsec): min=16748, max=58542, avg=18544.23, stdev=2026.18 00:17:35.799 clat (usec): min=11, max=275, avg=129.44, stdev=44.09 00:17:35.799 lat (usec): min=29, max=294, avg=147.98, stdev=44.34 00:17:35.799 clat percentiles (usec): 00:17:35.799 | 50.000th=[ 133], 99.000th=[ 212], 99.900th=[ 235], 99.990th=[ 260], 00:17:35.799 | 99.999th=[ 273] 00:17:35.799 write: IOPS=13.1k, BW=51.0MiB/s (53.5MB/s)(504MiB/9876msec); 0 zone resets 00:17:35.799 slat 
(usec): min=7, max=292, avg=16.23, stdev= 3.74 00:17:35.799 clat (usec): min=55, max=1446, avg=296.45, stdev=42.14 00:17:35.799 lat (usec): min=70, max=1669, avg=312.68, stdev=43.28 00:17:35.799 clat percentiles (usec): 00:17:35.799 | 50.000th=[ 302], 99.000th=[ 371], 99.900th=[ 619], 99.990th=[ 1254], 00:17:35.799 | 99.999th=[ 1369] 00:17:35.799 bw ( KiB/s): min=48800, max=54792, per=98.84%, avg=51613.05, stdev=1483.15, samples=19 00:17:35.799 iops : min=12200, max=13698, avg=12903.26, stdev=370.79, samples=19 00:17:35.799 lat (usec) : 20=0.01%, 50=0.01%, 100=15.21%, 250=40.13%, 500=44.58% 00:17:35.799 lat (usec) : 750=0.05%, 1000=0.02% 00:17:35.799 lat (msec) : 2=0.01% 00:17:35.799 cpu : usr=98.92%, sys=0.42%, ctx=22, majf=0, minf=13273 00:17:35.799 IO depths : 1=7.6%, 2=19.8%, 4=55.2%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:35.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:35.799 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:35.799 issued rwts: total=124427,128931,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:35.799 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:35.799 00:17:35.799 Run status group 0 (all jobs): 00:17:35.799 READ: bw=48.6MiB/s (51.0MB/s), 48.6MiB/s-48.6MiB/s (51.0MB/s-51.0MB/s), io=486MiB (510MB), run=10001-10001msec 00:17:35.799 WRITE: bw=51.0MiB/s (53.5MB/s), 51.0MiB/s-51.0MiB/s (53.5MB/s-53.5MB/s), io=504MiB (528MB), run=9876-9876msec 00:17:35.799 ----------------------------------------------------- 00:17:35.799 Suppressions used: 00:17:35.799 count bytes template 00:17:35.799 1 7 /usr/src/fio/parse.c 00:17:35.799 647 62112 /usr/src/fio/iolog.c 00:17:35.799 1 8 libtcmalloc_minimal.so 00:17:35.799 1 904 libcrypto.so 00:17:35.799 ----------------------------------------------------- 00:17:35.799 00:17:35.799 00:17:35.799 real 0m11.389s 00:17:35.799 user 0m11.577s 00:17:35.799 sys 0m0.700s 00:17:35.799 01:37:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:17:35.799 01:37:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:35.799 ************************************ 00:17:35.799 END TEST bdev_fio_rw_verify 00:17:35.799 ************************************ 00:17:35.799 01:37:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:35.799 01:37:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:35.799 01:37:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:35.799 01:37:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:35.799 01:37:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:17:35.799 01:37:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:17:35.799 01:37:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:35.799 01:37:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:35.799 01:37:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:35.799 01:37:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:17:35.799 01:37:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:35.799 01:37:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:35.799 01:37:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:35.799 01:37:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:17:35.799 01:37:33 blockdev_raid5f.bdev_fio -- 
common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:17:35.799 01:37:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:17:35.800 01:37:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "32ec6b9f-950b-4923-a13e-01181451b5ba"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "32ec6b9f-950b-4923-a13e-01181451b5ba",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "32ec6b9f-950b-4923-a13e-01181451b5ba",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "58cb0e91-fde4-429d-86db-4ce181f568a9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "05c1da10-72c6-4c03-bac3-5bf30583680f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "fbccf19f-5e18-407c-bad9-d856a3a7e2bb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:35.800 01:37:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 
'select(.supported_io_types.unmap == true) | .name' 00:17:35.800 01:37:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:35.800 01:37:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:35.800 /home/vagrant/spdk_repo/spdk 00:17:35.800 01:37:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:35.800 01:37:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:35.800 01:37:33 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:17:35.800 00:17:35.800 real 0m11.696s 00:17:35.800 user 0m11.698s 00:17:35.800 sys 0m0.849s 00:17:35.800 01:37:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:35.800 01:37:33 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:35.800 ************************************ 00:17:35.800 END TEST bdev_fio 00:17:35.800 ************************************ 00:17:35.800 01:37:33 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:35.800 01:37:33 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:35.800 01:37:33 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:17:35.800 01:37:33 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:35.800 01:37:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:35.800 ************************************ 00:17:35.800 START TEST bdev_verify 00:17:35.800 ************************************ 00:17:35.800 01:37:33 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:35.800 
[2024-10-09 01:37:33.906705] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:17:35.800 [2024-10-09 01:37:33.906842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101888 ] 00:17:35.800 [2024-10-09 01:37:34.044999] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:35.800 [2024-10-09 01:37:34.074719] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:35.800 [2024-10-09 01:37:34.155478] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.800 [2024-10-09 01:37:34.155923] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.800 Running I/O for 5 seconds... 00:17:37.678 11062.00 IOPS, 43.21 MiB/s [2024-10-09T01:37:37.510Z] 11124.50 IOPS, 43.46 MiB/s [2024-10-09T01:37:38.449Z] 11183.00 IOPS, 43.68 MiB/s [2024-10-09T01:37:39.829Z] 11186.00 IOPS, 43.70 MiB/s [2024-10-09T01:37:39.830Z] 11186.20 IOPS, 43.70 MiB/s 00:17:40.937 Latency(us) 00:17:40.937 [2024-10-09T01:37:39.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.937 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:40.937 Verification LBA range: start 0x0 length 0x2000 00:17:40.937 raid5f : 5.01 6699.63 26.17 0.00 0.00 28736.67 213.31 21592.09 00:17:40.937 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:40.937 Verification LBA range: start 0x2000 length 0x2000 00:17:40.937 raid5f : 5.02 4508.39 17.61 0.00 0.00 42729.87 230.27 30160.38 00:17:40.937 [2024-10-09T01:37:39.830Z] =================================================================================================================== 00:17:40.937 
[2024-10-09T01:37:39.830Z] Total : 11208.01 43.78 0.00 0.00 34368.37 213.31 30160.38 00:17:41.197 00:17:41.197 real 0m6.031s 00:17:41.197 user 0m11.045s 00:17:41.197 sys 0m0.349s 00:17:41.197 01:37:39 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:41.197 01:37:39 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:41.197 ************************************ 00:17:41.197 END TEST bdev_verify 00:17:41.197 ************************************ 00:17:41.197 01:37:39 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:41.197 01:37:39 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:17:41.197 01:37:39 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:41.197 01:37:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:41.197 ************************************ 00:17:41.197 START TEST bdev_verify_big_io 00:17:41.197 ************************************ 00:17:41.197 01:37:39 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:41.197 [2024-10-09 01:37:40.019282] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:17:41.197 [2024-10-09 01:37:40.019427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101975 ] 00:17:41.457 [2024-10-09 01:37:40.157481] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:17:41.457 [2024-10-09 01:37:40.186941] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:41.457 [2024-10-09 01:37:40.268790] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.457 [2024-10-09 01:37:40.268916] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.717 Running I/O for 5 seconds... 00:17:43.667 633.00 IOPS, 39.56 MiB/s [2024-10-09T01:37:43.940Z] 761.00 IOPS, 47.56 MiB/s [2024-10-09T01:37:44.879Z] 803.00 IOPS, 50.19 MiB/s [2024-10-09T01:37:45.818Z] 793.25 IOPS, 49.58 MiB/s [2024-10-09T01:37:46.077Z] 812.00 IOPS, 50.75 MiB/s 00:17:47.184 Latency(us) 00:17:47.184 [2024-10-09T01:37:46.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.184 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:47.184 Verification LBA range: start 0x0 length 0x200 00:17:47.184 raid5f : 5.20 463.62 28.98 0.00 0.00 6890751.68 189.22 303431.74 00:17:47.184 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:47.184 Verification LBA range: start 0x200 length 0x200 00:17:47.184 raid5f : 5.30 359.23 22.45 0.00 0.00 8798787.32 207.07 380203.62 00:17:47.184 [2024-10-09T01:37:46.077Z] =================================================================================================================== 00:17:47.184 [2024-10-09T01:37:46.077Z] Total : 822.85 51.43 0.00 0.00 7732038.02 189.22 380203.62 00:17:47.460 00:17:47.460 real 0m6.323s 00:17:47.460 user 0m11.607s 00:17:47.460 sys 0m0.355s 00:17:47.460 01:37:46 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:47.460 01:37:46 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:47.460 ************************************ 00:17:47.460 END TEST bdev_verify_big_io 00:17:47.460 ************************************ 00:17:47.460 01:37:46 blockdev_raid5f -- 
bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:47.460 01:37:46 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:47.460 01:37:46 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:47.460 01:37:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:47.460 ************************************ 00:17:47.460 START TEST bdev_write_zeroes 00:17:47.460 ************************************ 00:17:47.460 01:37:46 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:47.734 [2024-10-09 01:37:46.416476] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:17:47.735 [2024-10-09 01:37:46.416638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102057 ] 00:17:47.735 [2024-10-09 01:37:46.552665] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:47.735 [2024-10-09 01:37:46.582249] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.994 [2024-10-09 01:37:46.653216] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.254 Running I/O for 1 seconds... 
00:17:49.192 30015.00 IOPS, 117.25 MiB/s 00:17:49.192 Latency(us) 00:17:49.192 [2024-10-09T01:37:48.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.192 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:49.192 raid5f : 1.01 30002.53 117.20 0.00 0.00 4253.27 1342.37 5769.32 00:17:49.192 [2024-10-09T01:37:48.085Z] =================================================================================================================== 00:17:49.192 [2024-10-09T01:37:48.085Z] Total : 30002.53 117.20 0.00 0.00 4253.27 1342.37 5769.32 00:17:49.452 00:17:49.452 real 0m2.012s 00:17:49.452 user 0m1.559s 00:17:49.452 sys 0m0.332s 00:17:49.452 01:37:48 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:49.452 01:37:48 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:49.452 ************************************ 00:17:49.452 END TEST bdev_write_zeroes 00:17:49.452 ************************************ 00:17:49.712 01:37:48 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:49.712 01:37:48 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:49.712 01:37:48 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:49.712 01:37:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:49.712 ************************************ 00:17:49.712 START TEST bdev_json_nonenclosed 00:17:49.712 ************************************ 00:17:49.712 01:37:48 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:49.712 [2024-10-09 
01:37:48.505742] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:17:49.712 [2024-10-09 01:37:48.505861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102099 ] 00:17:49.973 [2024-10-09 01:37:48.642615] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:49.973 [2024-10-09 01:37:48.672473] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.973 [2024-10-09 01:37:48.755629] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.973 [2024-10-09 01:37:48.755746] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:49.973 [2024-10-09 01:37:48.755772] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:49.973 [2024-10-09 01:37:48.755783] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:50.233 00:17:50.233 real 0m0.502s 00:17:50.233 user 0m0.224s 00:17:50.233 sys 0m0.173s 00:17:50.233 01:37:48 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:50.233 01:37:48 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:50.233 ************************************ 00:17:50.233 END TEST bdev_json_nonenclosed 00:17:50.233 ************************************ 00:17:50.233 01:37:48 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:50.233 01:37:48 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:50.233 
01:37:48 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:50.233 01:37:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:50.233 ************************************ 00:17:50.233 START TEST bdev_json_nonarray 00:17:50.233 ************************************ 00:17:50.233 01:37:48 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:50.233 [2024-10-09 01:37:49.070378] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:17:50.233 [2024-10-09 01:37:49.070512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102130 ] 00:17:50.502 [2024-10-09 01:37:49.202597] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:50.502 [2024-10-09 01:37:49.230814] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.502 [2024-10-09 01:37:49.312907] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.502 [2024-10-09 01:37:49.313021] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:17:50.502 [2024-10-09 01:37:49.313046] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:50.502 [2024-10-09 01:37:49.313063] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:50.773 00:17:50.773 real 0m0.484s 00:17:50.773 user 0m0.218s 00:17:50.773 sys 0m0.162s 00:17:50.773 01:37:49 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:50.773 01:37:49 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:50.773 ************************************ 00:17:50.773 END TEST bdev_json_nonarray 00:17:50.773 ************************************ 00:17:50.773 01:37:49 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:17:50.773 01:37:49 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:17:50.773 01:37:49 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:17:50.773 01:37:49 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:17:50.773 01:37:49 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:17:50.773 01:37:49 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:50.773 01:37:49 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:50.773 01:37:49 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:17:50.773 01:37:49 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:17:50.773 01:37:49 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:17:50.773 01:37:49 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:17:50.773 00:17:50.773 real 0m36.982s 00:17:50.773 user 0m48.952s 00:17:50.773 sys 0m5.607s 00:17:50.773 01:37:49 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:50.773 01:37:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:50.773 
************************************ 00:17:50.773 END TEST blockdev_raid5f 00:17:50.773 ************************************ 00:17:50.773 01:37:49 -- spdk/autotest.sh@194 -- # uname -s 00:17:50.773 01:37:49 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:17:50.773 01:37:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:50.773 01:37:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:50.773 01:37:49 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:17:50.773 01:37:49 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:17:50.773 01:37:49 -- spdk/autotest.sh@256 -- # timing_exit lib 00:17:50.773 01:37:49 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:50.773 01:37:49 -- common/autotest_common.sh@10 -- # set +x 00:17:51.033 01:37:49 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:17:51.033 01:37:49 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:17:51.033 01:37:49 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:17:51.033 01:37:49 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:17:51.033 01:37:49 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:51.033 01:37:49 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:51.033 01:37:49 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:17:51.033 01:37:49 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:17:51.033 01:37:49 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:17:51.033 01:37:49 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:51.033 01:37:49 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:17:51.033 01:37:49 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:17:51.033 01:37:49 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:17:51.033 01:37:49 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:17:51.033 01:37:49 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:17:51.033 01:37:49 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:17:51.033 01:37:49 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:17:51.033 01:37:49 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:17:51.033 01:37:49 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:17:51.033 01:37:49 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:17:51.033 01:37:49 -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:51.033 01:37:49 -- common/autotest_common.sh@10 -- # set +x 00:17:51.033 01:37:49 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:17:51.033 01:37:49 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:17:51.033 01:37:49 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:17:51.033 01:37:49 -- common/autotest_common.sh@10 -- # set +x 00:17:53.575 INFO: APP EXITING 00:17:53.575 INFO: killing all VMs 00:17:53.575 INFO: killing vhost app 00:17:53.575 INFO: EXIT DONE 00:17:53.575 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:53.575 Waiting for block devices as requested 00:17:53.835 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:53.835 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:54.775 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:54.775 Cleaning 00:17:54.775 Removing: /var/run/dpdk/spdk0/config 00:17:54.775 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:17:54.775 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:17:54.775 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:17:54.775 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:17:54.775 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:17:54.775 Removing: /var/run/dpdk/spdk0/hugepage_info 00:17:54.775 Removing: /dev/shm/spdk_tgt_trace.pid70090 00:17:54.775 Removing: /var/run/dpdk/spdk0 00:17:54.775 Removing: /var/run/dpdk/spdk_pid100181 00:17:54.775 Removing: /var/run/dpdk/spdk_pid100498 00:17:54.775 Removing: /var/run/dpdk/spdk_pid101165 00:17:54.775 Removing: /var/run/dpdk/spdk_pid101424 00:17:54.775 Removing: /var/run/dpdk/spdk_pid101464 00:17:54.775 Removing: /var/run/dpdk/spdk_pid101495 00:17:54.775 Removing: 
/var/run/dpdk/spdk_pid101724 00:17:54.775 Removing: /var/run/dpdk/spdk_pid101888 00:17:55.035 Removing: /var/run/dpdk/spdk_pid101975 00:17:55.035 Removing: /var/run/dpdk/spdk_pid102057 00:17:55.035 Removing: /var/run/dpdk/spdk_pid102099 00:17:55.035 Removing: /var/run/dpdk/spdk_pid102130 00:17:55.035 Removing: /var/run/dpdk/spdk_pid69921 00:17:55.035 Removing: /var/run/dpdk/spdk_pid70090 00:17:55.035 Removing: /var/run/dpdk/spdk_pid70292 00:17:55.035 Removing: /var/run/dpdk/spdk_pid70385 00:17:55.035 Removing: /var/run/dpdk/spdk_pid70413 00:17:55.035 Removing: /var/run/dpdk/spdk_pid70525 00:17:55.035 Removing: /var/run/dpdk/spdk_pid70543 00:17:55.035 Removing: /var/run/dpdk/spdk_pid70731 00:17:55.035 Removing: /var/run/dpdk/spdk_pid70811 00:17:55.035 Removing: /var/run/dpdk/spdk_pid70896 00:17:55.035 Removing: /var/run/dpdk/spdk_pid71002 00:17:55.035 Removing: /var/run/dpdk/spdk_pid71082 00:17:55.035 Removing: /var/run/dpdk/spdk_pid71127 00:17:55.035 Removing: /var/run/dpdk/spdk_pid71158 00:17:55.035 Removing: /var/run/dpdk/spdk_pid71234 00:17:55.036 Removing: /var/run/dpdk/spdk_pid71351 00:17:55.036 Removing: /var/run/dpdk/spdk_pid71776 00:17:55.036 Removing: /var/run/dpdk/spdk_pid71837 00:17:55.036 Removing: /var/run/dpdk/spdk_pid71883 00:17:55.036 Removing: /var/run/dpdk/spdk_pid71894 00:17:55.036 Removing: /var/run/dpdk/spdk_pid71974 00:17:55.036 Removing: /var/run/dpdk/spdk_pid71990 00:17:55.036 Removing: /var/run/dpdk/spdk_pid72069 00:17:55.036 Removing: /var/run/dpdk/spdk_pid72075 00:17:55.036 Removing: /var/run/dpdk/spdk_pid72128 00:17:55.036 Removing: /var/run/dpdk/spdk_pid72146 00:17:55.036 Removing: /var/run/dpdk/spdk_pid72198 00:17:55.036 Removing: /var/run/dpdk/spdk_pid72212 00:17:55.036 Removing: /var/run/dpdk/spdk_pid72346 00:17:55.036 Removing: /var/run/dpdk/spdk_pid72388 00:17:55.036 Removing: /var/run/dpdk/spdk_pid72475 00:17:55.036 Removing: /var/run/dpdk/spdk_pid73667 00:17:55.036 Removing: /var/run/dpdk/spdk_pid73862 00:17:55.036 Removing: 
/var/run/dpdk/spdk_pid73997 00:17:55.036 Removing: /var/run/dpdk/spdk_pid74607 00:17:55.036 Removing: /var/run/dpdk/spdk_pid74813 00:17:55.036 Removing: /var/run/dpdk/spdk_pid74942 00:17:55.036 Removing: /var/run/dpdk/spdk_pid75557 00:17:55.036 Removing: /var/run/dpdk/spdk_pid75877 00:17:55.036 Removing: /var/run/dpdk/spdk_pid76006 00:17:55.036 Removing: /var/run/dpdk/spdk_pid77352 00:17:55.036 Removing: /var/run/dpdk/spdk_pid77593 00:17:55.036 Removing: /var/run/dpdk/spdk_pid77723 00:17:55.036 Removing: /var/run/dpdk/spdk_pid79054 00:17:55.036 Removing: /var/run/dpdk/spdk_pid79301 00:17:55.036 Removing: /var/run/dpdk/spdk_pid79430 00:17:55.036 Removing: /var/run/dpdk/spdk_pid80776 00:17:55.036 Removing: /var/run/dpdk/spdk_pid81211 00:17:55.036 Removing: /var/run/dpdk/spdk_pid81340 00:17:55.296 Removing: /var/run/dpdk/spdk_pid82781 00:17:55.296 Removing: /var/run/dpdk/spdk_pid83029 00:17:55.296 Removing: /var/run/dpdk/spdk_pid83164 00:17:55.296 Removing: /var/run/dpdk/spdk_pid84598 00:17:55.296 Removing: /var/run/dpdk/spdk_pid84847 00:17:55.296 Removing: /var/run/dpdk/spdk_pid84983 00:17:55.296 Removing: /var/run/dpdk/spdk_pid86424 00:17:55.296 Removing: /var/run/dpdk/spdk_pid86896 00:17:55.296 Removing: /var/run/dpdk/spdk_pid87025 00:17:55.296 Removing: /var/run/dpdk/spdk_pid87157 00:17:55.296 Removing: /var/run/dpdk/spdk_pid87558 00:17:55.296 Removing: /var/run/dpdk/spdk_pid88279 00:17:55.296 Removing: /var/run/dpdk/spdk_pid88639 00:17:55.296 Removing: /var/run/dpdk/spdk_pid89337 00:17:55.296 Removing: /var/run/dpdk/spdk_pid89761 00:17:55.296 Removing: /var/run/dpdk/spdk_pid90491 00:17:55.296 Removing: /var/run/dpdk/spdk_pid90890 00:17:55.296 Removing: /var/run/dpdk/spdk_pid92810 00:17:55.296 Removing: /var/run/dpdk/spdk_pid93243 00:17:55.296 Removing: /var/run/dpdk/spdk_pid93661 00:17:55.296 Removing: /var/run/dpdk/spdk_pid95708 00:17:55.296 Removing: /var/run/dpdk/spdk_pid96182 00:17:55.296 Removing: /var/run/dpdk/spdk_pid96686 00:17:55.296 Removing: 
/var/run/dpdk/spdk_pid97721 00:17:55.296 Removing: /var/run/dpdk/spdk_pid98038 00:17:55.296 Removing: /var/run/dpdk/spdk_pid98953 00:17:55.296 Removing: /var/run/dpdk/spdk_pid99269 00:17:55.296 Clean 00:17:55.296 01:37:54 -- common/autotest_common.sh@1451 -- # return 0 00:17:55.296 01:37:54 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:17:55.296 01:37:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:55.296 01:37:54 -- common/autotest_common.sh@10 -- # set +x 00:17:55.556 01:37:54 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:17:55.556 01:37:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:55.556 01:37:54 -- common/autotest_common.sh@10 -- # set +x 00:17:55.556 01:37:54 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:17:55.556 01:37:54 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:17:55.556 01:37:54 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:17:55.556 01:37:54 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:17:55.556 01:37:54 -- spdk/autotest.sh@394 -- # hostname 00:17:55.556 01:37:54 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:17:55.816 geninfo: WARNING: invalid characters removed from testname! 
00:18:22.385 01:38:18 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:22.385 01:38:21 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:24.294 01:38:23 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:26.830 01:38:25 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:28.735 01:38:27 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:31.284 01:38:29 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:33.202 01:38:31 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:18:33.202 01:38:32 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:18:33.202 01:38:32 -- common/autotest_common.sh@1681 -- $ lcov --version 00:18:33.202 01:38:32 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:18:33.463 01:38:32 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:18:33.463 01:38:32 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:18:33.463 01:38:32 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:18:33.463 01:38:32 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:18:33.463 01:38:32 -- scripts/common.sh@336 -- $ IFS=.-: 00:18:33.463 01:38:32 -- scripts/common.sh@336 -- $ read -ra ver1 00:18:33.463 01:38:32 -- scripts/common.sh@337 -- $ IFS=.-: 00:18:33.463 01:38:32 -- scripts/common.sh@337 -- $ read -ra ver2 00:18:33.463 01:38:32 -- scripts/common.sh@338 -- $ local 'op=<' 00:18:33.463 01:38:32 -- scripts/common.sh@340 -- $ ver1_l=2 00:18:33.463 01:38:32 -- scripts/common.sh@341 -- $ ver2_l=1 00:18:33.463 01:38:32 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:18:33.463 01:38:32 -- scripts/common.sh@344 -- $ case "$op" in 00:18:33.463 01:38:32 -- scripts/common.sh@345 -- $ : 1 00:18:33.463 01:38:32 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:18:33.463 01:38:32 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) ))
00:18:33.463 01:38:32 -- scripts/common.sh@365 -- $ decimal 1
00:18:33.463 01:38:32 -- scripts/common.sh@353 -- $ local d=1
00:18:33.463 01:38:32 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:18:33.463 01:38:32 -- scripts/common.sh@355 -- $ echo 1
00:18:33.463 01:38:32 -- scripts/common.sh@365 -- $ ver1[v]=1
00:18:33.463 01:38:32 -- scripts/common.sh@366 -- $ decimal 2
00:18:33.463 01:38:32 -- scripts/common.sh@353 -- $ local d=2
00:18:33.463 01:38:32 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:18:33.463 01:38:32 -- scripts/common.sh@355 -- $ echo 2
00:18:33.463 01:38:32 -- scripts/common.sh@366 -- $ ver2[v]=2
00:18:33.463 01:38:32 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:18:33.463 01:38:32 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:18:33.463 01:38:32 -- scripts/common.sh@368 -- $ return 0
00:18:33.463 01:38:32 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:33.463 01:38:32 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:18:33.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:33.463 --rc genhtml_branch_coverage=1
00:18:33.463 --rc genhtml_function_coverage=1
00:18:33.463 --rc genhtml_legend=1
00:18:33.463 --rc geninfo_all_blocks=1
00:18:33.463 --rc geninfo_unexecuted_blocks=1
00:18:33.463 
00:18:33.463 '
00:18:33.463 01:38:32 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:18:33.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:33.464 --rc genhtml_branch_coverage=1
00:18:33.464 --rc genhtml_function_coverage=1
00:18:33.464 --rc genhtml_legend=1
00:18:33.464 --rc geninfo_all_blocks=1
00:18:33.464 --rc geninfo_unexecuted_blocks=1
00:18:33.464 
00:18:33.464 '
00:18:33.464 01:38:32 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:18:33.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:33.464 --rc genhtml_branch_coverage=1
00:18:33.464 --rc genhtml_function_coverage=1
00:18:33.464 --rc genhtml_legend=1
00:18:33.464 --rc geninfo_all_blocks=1
00:18:33.464 --rc geninfo_unexecuted_blocks=1
00:18:33.464 
00:18:33.464 '
00:18:33.464 01:38:32 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:18:33.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:33.464 --rc genhtml_branch_coverage=1
00:18:33.464 --rc genhtml_function_coverage=1
00:18:33.464 --rc genhtml_legend=1
00:18:33.464 --rc geninfo_all_blocks=1
00:18:33.464 --rc geninfo_unexecuted_blocks=1
00:18:33.464 
00:18:33.464 '
00:18:33.464 01:38:32 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:18:33.464 01:38:32 -- scripts/common.sh@15 -- $ shopt -s extglob
00:18:33.464 01:38:32 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:18:33.464 01:38:32 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:18:33.464 01:38:32 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:18:33.464 01:38:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:33.464 01:38:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:33.464 01:38:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:33.464 01:38:32 -- paths/export.sh@5 -- $ export PATH
00:18:33.464 01:38:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:33.464 01:38:32 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:18:33.464 01:38:32 -- common/autobuild_common.sh@486 -- $ date +%s
00:18:33.464 01:38:32 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728437912.XXXXXX
00:18:33.464 01:38:32 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728437912.Sd49jR
00:18:33.464 01:38:32 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:18:33.464 01:38:32 -- common/autobuild_common.sh@492 -- $ '[' -n main ']'
00:18:33.464 01:38:32 -- common/autobuild_common.sh@493 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:18:33.464 01:38:32 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:18:33.464 01:38:32 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:18:33.464 01:38:32 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:18:33.464 01:38:32 -- common/autobuild_common.sh@502 -- $ get_config_params
00:18:33.464 01:38:32 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:18:33.464 01:38:32 -- common/autotest_common.sh@10 -- $ set +x
00:18:33.464 01:38:32 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:18:33.464 01:38:32 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:18:33.464 01:38:32 -- pm/common@17 -- $ local monitor
00:18:33.464 01:38:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:33.464 01:38:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:33.464 01:38:32 -- pm/common@25 -- $ sleep 1
00:18:33.464 01:38:32 -- pm/common@21 -- $ date +%s
00:18:33.464 01:38:32 -- pm/common@21 -- $ date +%s
00:18:33.464 01:38:32 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728437912
00:18:33.464 01:38:32 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728437912
00:18:33.464 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728437912_collect-vmstat.pm.log
00:18:33.464 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728437912_collect-cpu-load.pm.log
00:18:34.405 01:38:33 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:18:34.405 01:38:33 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:18:34.405 01:38:33 -- spdk/autopackage.sh@14 -- $ timing_finish
00:18:34.405 01:38:33 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:18:34.405 01:38:33 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:18:34.405 01:38:33 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:18:34.405 01:38:33 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:18:34.405 01:38:33 -- pm/common@29 -- $ signal_monitor_resources TERM
00:18:34.405 01:38:33 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:18:34.405 01:38:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:34.405 01:38:33 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:18:34.405 01:38:33 -- pm/common@44 -- $ pid=103644
00:18:34.405 01:38:33 -- pm/common@50 -- $ kill -TERM 103644
00:18:34.405 01:38:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:34.405 01:38:33 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:18:34.405 01:38:33 -- pm/common@44 -- $ pid=103646
00:18:34.405 01:38:33 -- pm/common@50 -- $ kill -TERM 103646
00:18:34.665 + [[ -n 6159 ]]
00:18:34.665 + sudo kill 6159
00:18:34.676 [Pipeline] }
00:18:34.691 [Pipeline] // timeout
00:18:34.696 [Pipeline] }
00:18:34.711 [Pipeline] // stage
00:18:34.716 [Pipeline] }
00:18:34.730 [Pipeline] // catchError
00:18:34.739 [Pipeline] stage
00:18:34.741 [Pipeline] { (Stop VM)
00:18:34.754 [Pipeline] sh
00:18:35.037 + vagrant halt
00:18:36.947 ==> default: Halting domain...
00:18:45.095 [Pipeline] sh
00:18:45.379 + vagrant destroy -f
00:18:47.919 ==> default: Removing domain...
00:18:47.932 [Pipeline] sh
00:18:48.217 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:18:48.227 [Pipeline] }
00:18:48.242 [Pipeline] // stage
00:18:48.247 [Pipeline] }
00:18:48.261 [Pipeline] // dir
00:18:48.266 [Pipeline] }
00:18:48.280 [Pipeline] // wrap
00:18:48.287 [Pipeline] }
00:18:48.300 [Pipeline] // catchError
00:18:48.309 [Pipeline] stage
00:18:48.311 [Pipeline] { (Epilogue)
00:18:48.324 [Pipeline] sh
00:18:48.609 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:18:52.851 [Pipeline] catchError
00:18:52.853 [Pipeline] {
00:18:52.866 [Pipeline] sh
00:18:53.152 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:18:53.152 Artifacts sizes are good
00:18:53.162 [Pipeline] }
00:18:53.176 [Pipeline] // catchError
00:18:53.265 [Pipeline] archiveArtifacts
00:18:53.273 Archiving artifacts
00:18:53.371 [Pipeline] cleanWs
00:18:53.382 [WS-CLEANUP] Deleting project workspace...
00:18:53.382 [WS-CLEANUP] Deferred wipeout is used...
00:18:53.389 [WS-CLEANUP] done
00:18:53.391 [Pipeline] }
00:18:53.405 [Pipeline] // stage
00:18:53.409 [Pipeline] }
00:18:53.422 [Pipeline] // node
00:18:53.426 [Pipeline] End of Pipeline
00:18:53.466 Finished: SUCCESS